Sample records for visual word forms

  1. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex--especially those that evolved to support the visual processing of faces--are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  2. Dysfunctional visual word form processing in progressive alexia

    PubMed Central

    Wilson, Stephen M.; Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.

    2013-01-01

    Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. 
In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694

  3. Dysfunctional visual word form processing in progressive alexia.

    PubMed

    Wilson, Stephen M; Rising, Kindle; Stib, Matthew T; Rapcsak, Steven Z; Beeson, Pélagie M

    2013-04-01

    Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the 'visual word form area'. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. 
In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy.

  4. Word learning and the cerebral hemispheres: from serial to parallel processing of written words

    PubMed Central

    Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura

    2009-01-01

    Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140

  5. Functional Specificity of the Visual Word Form Area: General Activation for Words and Symbols but Specific Network Activation for Words

    ERIC Educational Resources Information Center

    Reinke, Karen; Fernandes, Myra; Schwindt, Graeme; O'Craven, Kathleen; Grady, Cheryl L.

    2008-01-01

    The functional specificity of the brain region known as the Visual Word Form Area (VWFA) was examined using fMRI. We explored whether this area serves a general role in processing symbolic stimuli, rather than being selective for the processing of words. Brain activity was measured during a visual 1-back task to English words, meaningful symbols…

  6. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words were used as stimuli: Katakana-familiar words, which are more frequently written in Katakana script, and Hiragana-familiar words, which are written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script, and in the visually unfamiliar condition, in the less familiar script. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, i.e., superior performance with bilateral presentation relative to unilateral presentation, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  7. The development of cortical sensitivity to visual word forms.

    PubMed

    Ben-Shachar, Michal; Dougherty, Robert F; Deutsch, Gayle K; Wandell, Brian A

    2011-09-01

    The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous group of children, initially 7-12 years old. The results show age-related increase in children's cortical sensitivity to word visibility in posterior left occipito-temporal sulcus (LOTS), nearby the anatomical location of the visual word form area. Moreover, the rate of increase in LOTS word sensitivity specifically correlates with the rate of improvement in sight word efficiency, a measure of speeded overt word reading. Other cortical regions, including V1, posterior parietal cortex, and the right homologue of LOTS, did not demonstrate such developmental changes. These results provide developmental support for the hypothesis that LOTS is part of the cortical circuitry that extracts visual word forms quickly and efficiently and highlight the importance of developing cortical sensitivity to word visibility in reading acquisition.

  8. The Development of Cortical Sensitivity to Visual Word Forms

    PubMed Central

    Ben-Shachar, Michal; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.

    2011-01-01

    The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous group of children, initially 7–12 years old. The results show age-related increase in children's cortical sensitivity to word visibility in posterior left occipito-temporal sulcus (LOTS), nearby the anatomical location of the visual word form area. Moreover, the rate of increase in LOTS word sensitivity specifically correlates with the rate of improvement in sight word efficiency, a measure of speeded overt word reading. Other cortical regions, including V1, posterior parietal cortex, and the right homologue of LOTS, did not demonstrate such developmental changes. These results provide developmental support for the hypothesis that LOTS is part of the cortical circuitry that extracts visual word forms quickly and efficiently and highlight the importance of developing cortical sensitivity to word visibility in reading acquisition. PMID:21261451

  9. Dissociating visual form from lexical frequency using Japanese.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Hogan, John S; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T

    2013-05-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form - an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally, participants responded more quickly to high than low frequency words and to visually familiar relative to less familiar words, independent of script. Critically, the imaging results showed that visual familiarity, as opposed to lexical frequency, had a strong effect on activation in ventral occipito-temporal cortex. Activation here was also greater for Kanji than Hiragana words and this was not due to their inherent differences in visual complexity. These findings can be understood within a predictive coding framework in which vOT receives bottom-up information encoding complex visual forms and top-down predictions from regions encoding non-visual attributes of the stimulus.

  10. Decoding and disrupting left midfusiform gyrus activity during word reading

    PubMed Central

    Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh

    2016-01-01

    The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763

  11. Decoding and disrupting left midfusiform gyrus activity during word reading.

    PubMed

    Hirshorn, Elizabeth A; Li, Yuanning; Ward, Michael J; Richardson, R Mark; Fiez, Julie A; Ghuman, Avniel Singh

    2016-07-19

    The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation.
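The two records above report that early left-midfusiform activity is "consistent with an orthographic similarity space", i.e. a representation in which visually and letter-wise similar words lie close together. As a purely illustrative sketch, one simple orthographic metric over letter strings is Levenshtein (edit) distance; the word list and metric below are assumptions for demonstration, not the authors' materials, whose actual analysis decoded intracranial neural signals.

```python
from itertools import product

def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance between two strings:
    # minimum number of insertions, deletions, and substitutions.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

# A toy "similarity space": pairwise distances among a few words.
words = ["hint", "lint", "mint", "mine", "wine"]
dist = {(w1, w2): levenshtein(w1, w2) for w1, w2 in product(words, words)}

print(dist[("hint", "lint")])  # 1: neighbors differing by one letter
print(dist[("mint", "wine")])  # 2: more distant in the space
```

In such a space, one-letter neighbors ("hint"/"lint") sit closer than word pairs sharing fewer letters, which is the kind of graded structure the decoding analyses tested for.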

  12. The Neural Basis of Obligatory Decomposition of Suffixed Words

    ERIC Educational Resources Information Center

    Lewis, Gwyneth; Solomyak, Olla; Marantz, Alec

    2011-01-01

    Recent neurolinguistic studies present somewhat conflicting evidence concerning the role of the inferior temporal cortex (IT) in visual word recognition within the first 200 ms after presentation. On the one hand, fMRI studies of the Visual Word Form Area (VWFA) suggest that the IT might recover representations of the orthographic form of words.…

  13. Visual feature-tolerance in the reading network.

    PubMed

    Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A

    2011-09-08

    A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas.

  14. On the Functional Neuroanatomy of Visual Word Processing: Effects of Case and Letter Deviance

    ERIC Educational Resources Information Center

    Kronbichler, Martin; Klackl, Johannes; Richlan, Fabio; Schurz, Matthias; Staffen, Wolfgang; Ladurner, Gunther; Wimmer, Heinz

    2009-01-01

    This functional magnetic resonance imaging study contrasted case-deviant and letter-deviant forms with familiar forms of the same phonological words (e.g., "TaXi" and "Taksi" vs. "Taxi") and found that both types of deviance led to increased activation in a left occipito-temporal region, corresponding to the visual word form area (VWFA). The…

  15. Representation of visual symbols in the visual word processing network.

    PubMed

    Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S

    2015-03-01

    Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation.

  16. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, and subjects performed lexical decisions on them. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar-script condition did increasing presentation time affect performance differently in each visual field. To examine whether this lateral difference during the processing of unfamiliar scripts is related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which may be left-hemisphere lateralized, whereas orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations, or visual word forms.

  17. Artful terms: A study on aesthetic word usage for visual art versus film and music.

    PubMed

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187-201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms.

  18. Artful terms: A study on aesthetic word usage for visual art versus film and music

    PubMed Central

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187–201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms. PMID:23145287

  19. A Dual-Route Perspective on Brain Activation in Response to Visual Words: Evidence for a Length by Lexicality Interaction in the Visual Word Form Area (VWFA)

    PubMed Central

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-01-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., “Does xxx sound like an existing word?”) presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538

  20. A dual-route perspective on brain activation in response to visual words: evidence for a length by lexicality interaction in the visual word form area (VWFA).

    PubMed

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-02-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes.

  21. The Development of Cortical Sensitivity to Visual Word Forms

    ERIC Educational Resources Information Center

    Ben-Shachar, Michal; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.

    2011-01-01

    The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous…

  22. Evidence for highly selective neuronal tuning to whole words in the "visual word form area".

    PubMed

    Glezer, Laurie S; Jiang, Xiong; Riesenhuber, Maximilian

    2009-04-30

    Theories of reading have posited the existence of a neural representation coding for whole real words (i.e., an orthographic lexicon), but experimental support for such a representation has proved elusive. Using fMRI rapid adaptation techniques, we provide evidence that the human left ventral occipitotemporal cortex (specifically the "visual word form area," VWFA) contains a representation based on neurons highly selective for individual real words, in contrast to current theories that posit a sublexical representation in the VWFA.

  23. Early access to abstract representations in developing readers: evidence from masked priming.

    PubMed

    Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel

    2013-07-01

    A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing, as measured by masked priming, in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word than from related primes that do not. Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers.

  4. Dynamic spatial organization of the occipito-temporal word form area for second language processing.

    PubMed

    Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li

    2017-08-01

    Despite the left occipito-temporal region showing consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks on visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than for L1 in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting that higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting that higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, higher L2 proficiency in Chinese-English bilinguals is associated with assimilation to native-language mechanisms, whereas in English-Chinese bilinguals it is associated with accommodation to second-language mechanisms. Copyright © 2017. Published by Elsevier Ltd.

  5. Similarity and Difference in Learning L2 Word-Form

    ERIC Educational Resources Information Center

    Hamada, Megumi; Koda, Keiko

    2011-01-01

    This study explored similarity and difference in L2 written word-form learning from a cross-linguistic perspective. This study investigated whether learners' L1 orthographic background, which influences L2 visual word recognition (e.g., Wang et al., 2003), also influences L2 word-form learning, in particular, the sensitivity to phonological and…

  6. Left-lateralized N170 Effects of Visual Expertise in Reading: Evidence from Japanese Syllabic and Logographic Scripts

    PubMed Central

    Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2015-01-01

    The N170 component of the event-related potential (ERP) reflects experience-dependent neural changes in several forms of visual expertise, including expertise for visual words. Readers skilled in writing systems that link characters to phonemes (i.e., alphabetic writing) typically produce a left-lateralized N170 to visual word forms. This study examined the N170 in three Japanese scripts that link characters to larger phonological units. Participants were monolingual English speakers (EL1) and native Japanese speakers (JL1) who were also proficient in English. ERPs were collected using a 129-channel array, as participants performed a series of experiments viewing words or novel control stimuli in a repetition detection task. The N170 was strongly left-lateralized for all three Japanese scripts (including logographic Kanji characters) in JL1 participants, but bilateral in EL1 participants viewing these same stimuli. This demonstrates that left-lateralization of the N170 is dependent on specific reading expertise and is not limited to alphabetic scripts. Additional contrasts within the moraic Katakana script revealed equivalent N170 responses in JL1 speakers for familiar Katakana words and for Kanji words transcribed into novel Katakana words, suggesting that the N170 expertise effect is driven by script familiarity rather than familiarity with particular visual word forms. Finally, for English words and novel symbol string stimuli, both EL1 and JL1 subjects produced equivalent responses for the novel symbols, and more left-lateralized N170 responses for the English words, indicating that such effects are not limited to the first language. Taken together, these cross-linguistic results suggest that similar neural processes underlie visual expertise for print in very different writing systems. PMID:18370600

  7. Dissociating Visual Form from Lexical Frequency Using Japanese

    ERIC Educational Resources Information Center

    Twomey, Tae; Duncan, Keith J. Kawabata; Hogan, John S.; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T.

    2013-01-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form--an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally,…

  8. Developmental Differences for Word Processing in the Ventral Stream

    ERIC Educational Resources Information Center

    Olulade, Olumide A.; Flowers, D. Lynn; Napoliello, Eileen M.; Eden, Guinevere F.

    2013-01-01

    The visual word form system (VWFS), located in the occipito-temporal cortex, is involved in orthographic processing of visually presented words (Cohen et al., 2002). Recent fMRI studies in children and adults have demonstrated a gradient of increasing word-selectivity along the posterior-to-anterior axis of this system (Vinckier et al., 2007), yet…

  9. Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning

    ERIC Educational Resources Information Center

    Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi

    2008-01-01

    The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…

  10. Direct comparison of four implicit memory tests.

    PubMed

    Rajaram, S; Roediger, H L

    1993-07-01

    Four verbal implicit memory tests - word identification, word stem completion, word fragment completion, and anagram solution - were directly compared in one experiment and contrasted with free recall. On all implicit tests, priming was greatest from prior visual presentation of words, less (but significant) from auditory presentation, and least from pictorial presentation. Typefont did not affect priming. In free recall, pictures were recalled better than words. All four implicit tests thus largely index perceptual (lexical) operations in recognizing words, i.e., visual word form representations.

  11. Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.

    PubMed

    Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing

    2018-03-28

    The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while very few have investigated the role of finer skills. The present study filled this gap by examining the relations of two finer visual skills, grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect and discriminate the relative locations of features), to Chinese character processing, as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not in visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, neither visual skill correlated with lexical decision performance. These findings provide the first empirical evidence that finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.
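
    The correlational analysis described above pairs each reader's acuity score with their task performance. A minimal sketch of that logic with a hand-rolled Pearson correlation; the variable names and per-subject scores below are hypothetical, not the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented per-subject scores: higher Vernier acuity pairing with better
# character form-matching accuracy yields a positive correlation.
vernier_score = [1.2, 2.0, 2.8, 3.5, 4.1]
form_matching_acc = [0.71, 0.78, 0.80, 0.88, 0.93]
print(round(pearson_r(vernier_score, form_matching_acc), 2))
```

    The abstract's pattern would show up as a reliable positive r for character form-matching but an r near zero for grating acuity and for lexical decision.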

  12. Do preschool children learn to read words from environmental prints?

    PubMed

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude this phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions in which the contextual cues (i.e., everything apart from the words themselves, such as color, logo, and font type) were gradually minimized. Children aged 3 to 5 were tested. Children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, whereas the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children do not easily learn words by extracting their visual form information, even from familiar environmental prints. However, by age 5, children begin to pay more attention to the visual form of words in highly familiar logos than 3- and 4-year-olds do.

  13. Do Preschool Children Learn to Read Words from Environmental Prints?

    PubMed Central

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude this phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions in which the contextual cues (i.e., everything apart from the words themselves, such as color, logo, and font type) were gradually minimized. Children aged 3 to 5 were tested. Children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of the various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, whereas the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children do not easily learn words by extracting their visual form information, even from familiar environmental prints. However, by age 5, children begin to pay more attention to the visual form of words in highly familiar logos than 3- and 4-year-olds do. PMID:24465677

  14. The neurobiological basis of seeing words

    PubMed Central

    Wandell, Brian A.

    2011-01-01

    This review summarizes recent ideas about the cortical circuits for seeing words, an important part of the brain system for reading. Historically, the link between the visual cortex and reading has been contentious. One influential position is that the visual cortex plays a minimal role, limited to identifying contours, and that information about these contours is delivered to cortical regions specialized for reading and language. An alternative position is that specializations for seeing words develop within the visual cortex itself. Modern neuroimaging measurements—including both functional magnetic resonance imaging (fMRI) and diffusion weighted imaging with tractography data—support the position that circuitry for seeing the statistical regularities of word forms develops within the ventral occipitotemporal cortex, which also contains important circuitry for seeing faces, colors, and forms. The review explains new findings about the visual pathways, including visual field maps, as well as new findings about how we see words. The measurements from the two fields are in close cortical proximity, and there are good opportunities for coordinating theoretical ideas about function in the ventral occipitotemporal cortex. PMID:21486296

  15. The neurobiological basis of seeing words.

    PubMed

    Wandell, Brian A

    2011-04-01

    This review summarizes recent ideas about the cortical circuits for seeing words, an important part of the brain system for reading. Historically, the link between the visual cortex and reading has been contentious. One influential position is that the visual cortex plays a minimal role, limited to identifying contours, and that information about these contours is delivered to cortical regions specialized for reading and language. An alternative position is that specializations for seeing words develop within the visual cortex itself. Modern neuroimaging measurements-including both functional magnetic resonance imaging (fMRI) and diffusion weighted imaging with tractography (DTI) data-support the position that circuitry for seeing the statistical regularities of word forms develops within the ventral occipitotemporal cortex, which also contains important circuitry for seeing faces, colors, and forms. This review explains new findings about the visual pathways, including visual field maps, as well as new findings about how we see words. The measurements from the two fields are in close cortical proximity, and there are good opportunities for coordinating theoretical ideas about function in the ventral occipitotemporal cortex. © 2011 New York Academy of Sciences.

  16. The Embroidered Word: A Stitchery Overview for Visual Arts Education

    ERIC Educational Resources Information Center

    Julian, June

    2012-01-01

    This historical research provides an examination of the embroidered word as a visual art piece, from early traditional examples to contemporary forms. It is intended to encourage appreciation of embroidery as an art form and to stimulate discussion about the role of historical contexts in the studio education of artists at the university level.…

  17. Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study

    ERIC Educational Resources Information Center

    Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti

    2006-01-01

    By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…

  18. A Visual Literacy Approach to Developmental and Remedial Reading.

    ERIC Educational Resources Information Center

    Barley, Steven D.

    Photography, films, and other visual materials offer a different approach to teaching reading. For example, photographs may be arranged in sequences analogous to the ways words form sentences and sentences form stories. If, as is possible, children respond first to pictures and later to words, the training they receive in visual literacy may help them…

  19. A supramodal brain substrate of word form processing--an fMRI study on homonym finding with auditory and visual input.

    PubMed

    Balthasar, Andrea J R; Huber, Walter; Weis, Susanne

    2011-09-02

    Homonym processing in German is of theoretical interest because homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect depends on visual as opposed to auditory input modality. Eighteen healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym related to two spoken or written words. A semantic-lexical task - oral naming from two-word definitions - was used as a control condition. When comparing brain activation for solved homonym trials to brain activation for both unsolved homonyms and solved definition trials, we obtained two activation patterns that characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it may be the substrate of access to word form knowledge. Copyright © 2011 Elsevier B.V. All rights reserved.

  20. Early access to abstract representations in developing readers: Evidence from masked priming

    PubMed Central

    Perea, Manuel; Abu Mallouh, Reem; Carreiras, Manuel

    2013-01-01

    A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing - as measured by masked priming - in young children (3rd and 6th graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early moments of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktzb-ktAb] - note that the three initial letters are connected in prime and target) than from those that do not ([ktxb-ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of primes. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers. PMID:23786474

  1. Sequential then Interactive Processing of Letters and Words in the Left Fusiform Gyrus

    PubMed Central

    Thesen, Thomas; McDonald, Carrie R.; Carlson, Chad; Doyle, Werner; Cash, Syd; Sherfey, Jason; Felsovalyi, Olga; Girard, Holly; Barr, William; Devinsky, Orrin; Kuzniecky, Ruben; Halgren, Eric

    2013-01-01

    Despite decades of cognitive, neuropsychological, and neuroimaging studies, it is unclear if letters are identified prior to word-form encoding during reading, or if letters and their combinations are encoded simultaneously and interactively. Here, using functional magnetic resonance imaging, we show that a ‘letter-form’ area (responding more to consonant strings than false fonts) can be distinguished from an immediately anterior ‘visual word-form area’ in ventral occipitotemporal cortex (responding more to words than consonant strings). Letter-selective magnetoencephalographic responses begin in the letter-form area ~60ms earlier than word-selective responses in the word-form area. Local field potentials confirm the latency and location of letter-selective responses. This area shows increased high gamma power for ~400ms, and strong phase-locking with more anterior areas supporting lexico-semantic processing. These findings suggest that during reading, visual stimuli are first encoded as letters before their combinations are encoded as words. Activity then rapidly spreads anteriorly, and the entire network is engaged in sustained integrative processing. PMID:23250414

  2. Orthographic processing in pigeons (Columba livia)

    PubMed Central

    Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael

    2016-01-01

    Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given that they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or is a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million years ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211
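
    Two of the orthographic statistics the pigeons were reportedly sensitive to, bigram frequency and edit distance, are easy to make concrete. A minimal sketch over a toy lexicon (the word list is invented; real studies use large frequency-normed corpora):

```python
from collections import Counter

def bigram_counts(lexicon):
    """Count occurrences of each adjacent letter pair across a word list."""
    counts = Counter()
    for word in lexicon:
        counts.update(word[i:i + 2] for i in range(len(word) - 1))
    return counts

def edit_distance(a, b):
    """Levenshtein distance: minimum insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

lexicon = ["very", "every", "never", "ever", "verb"]
print(bigram_counts(lexicon)["ve"])   # a frequent bigram in this toy lexicon
print(edit_distance("vrey", "very"))  # a transposed nonword is close to a real word
```

    A nonword built from high-frequency bigrams with a small edit distance to a real word is more "word-like", which is the dimension the pigeons' discrimination appeared to track.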

  3. Words with and without internal structure: what determines the nature of orthographic and morphological processing?

    PubMed Central

    Velan, Hadas; Frost, Ram

    2010-01-01

    Recent studies suggest that basic effects that are markers of visual word recognition in Indo-European languages cannot be obtained in Hebrew or in Arabic. Although Hebrew has an alphabetic writing system, just like English, French, or Spanish, a series of studies has consistently suggested that simple form-orthographic priming and letter-transposition (TL) priming are not found in Hebrew. In four experiments, we tested the hypothesis that this is due to the fact that Semitic words have an underlying structure that constrains the possible alignment of phonemes and their respective letters. The experiments contrasted typical Semitic words, which are root-derived, with Hebrew words of non-Semitic origin, which are morphologically simple and resemble base words in European languages. Using RSVP, TL priming, and form-priming manipulations, we show that Hebrew readers process morphologically simple Hebrew words much as they process English words. These words indeed reveal the typical form-priming and TL priming effects reported in European languages. In contrast, words with internal structure are processed differently and require a different code for lexical access. We discuss the implications of these findings for current models of visual word recognition. PMID:21163472

  4. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    ERIC Educational Resources Information Center

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  5. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a two-alternative forced-choice (2AFC) translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language and the appropriate meaning in another language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for these words produced by two talkers was also assessed. During the pretest, accuracy was lowest for A stimuli, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words on the basis of visual information only. The effect of translation training with AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  6. Comparison of spatiotemporal cortical activation pattern during visual perception of Korean, English, Chinese words: an event-related potential study.

    PubMed

    Kim, Kyung Hwan; Kim, Ja Hyun

    2006-02-20

    The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written form on cortical processing of visually presented words, because of the partial similarities and differences among words of these languages and the familiarity of native Koreans with all three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate only the neuronal circuitry involved in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, high temporal resolution is required; we therefore utilized event-related potentials (ERPs) obtained from high-density electroencephalograms. The differences and similarities observed in statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density appear to be in line with the demands of visual and semantic analysis arising from the characteristics of each language and the expected task difficulty for native Korean subjects.

  7. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    PubMed

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors 0270-6474/17/3711495-10$15.00/0.

  8. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers

    PubMed Central

    Kanjlia, Shipra; Merabet, Lotfi B.

    2017-01-01

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the “VWFA” is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. PMID:29061700

  9. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    ERIC Educational Resources Information Center

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  10. The Influence of Visual Word Form in Reading: Single Case Study of an Arabic Patient with Deep Dyslexia

    ERIC Educational Resources Information Center

    Boumaraf, Assia; Macoir, Joël

    2016-01-01

    Deep dyslexia is a written language disorder characterized by poor reading of non-words, an advantage for concrete over abstract words, and the production of semantic, visual, and morphological errors. In this single case study of an Arabic patient with input deep dyslexia, we investigated the impact of graphic features of Arabic on manifestations of…

  11. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  12. Ease of identifying words degraded by visual noise.

    PubMed

    Barber, P; de la Mahotière, C

    1982-08-01

    A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task, a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease with which the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.

  13. The relationship between visual word and face processing lateralization in the fusiform gyri: A cross-sectional study.

    PubMed

    Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S

    2016-08-01

    Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. Twenty-six literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
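
    A conventional formulation of such lateralization indices is LI = (L − R)/(L + R), where L and R are homologous left- and right-hemisphere activation measures (e.g. cluster size or peak response). The sketch below illustrates this formula; the function name and the example values are illustrative assumptions, not taken from the study, which does not report its exact formula.

    ```python
    def lateralization_index(left: float, right: float) -> float:
        """Conventional lateralization index: +1 = fully left-lateralized,
        -1 = fully right-lateralized, 0 = perfectly bilateral.

        `left`/`right` are nonnegative activation measures for homologous
        regions (e.g. suprathreshold cluster size or peak BOLD response).
        """
        total = left + right
        if total == 0:
            raise ValueError("no activation in either hemisphere")
        return (left - right) / total

    # Hypothetical example: a 300-voxel left word cluster vs. a 100-voxel
    # right cluster gives LI = 0.5 (left-lateralized); the reverse pattern
    # for faces gives a negative LI (right-lateralized).
    li_words = lateralization_index(300, 100)
    li_faces = lateralization_index(80, 240)
    ```

    Correlating such word and face indices across subjects is then an ordinary Pearson correlation over the per-subject LI pairs.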

  14. Subliminal convergence of Kanji and Kana words: further evidence for functional parcellation of the posterior temporal cortex in visual word perception.

    PubMed

    Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid

    2005-06-01

    Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also revealed shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.

  15. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    PubMed Central

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: words rendered unrecognizable by visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  16. The Left Occipitotemporal Cortex Does Not Show Preferential Activity for Words

    PubMed Central

    Petersen, Steven E.; Schlaggar, Bradley L.

    2012-01-01

    Regions in left occipitotemporal (OT) cortex, including the putative visual word form area, are among the most commonly activated in imaging studies of single-word reading. It remains unclear whether this part of the brain is more precisely characterized as specialized for words and/or letters or contains more general-use visual regions having properties useful for processing word stimuli, among others. In Analysis 1, we found no evidence of greater activity in left OT regions for words or letter strings relative to other high–spatial frequency high-contrast stimuli, including line drawings and Amharic strings (which constitute the Ethiopian writing system). In Analysis 2, we further investigated processing characteristics of OT cortex potentially useful in reading. Analysis 2 showed that a specific part of OT cortex 1) is responsive to visual feature complexity, measured by the number of strokes forming groups of letters or Amharic strings and 2) processes learned combinations of characters, such as those in words and pseudowords, as groups but does not do so in consonant and Amharic strings. Together, these results indicate that while regions of left OT cortex are not specialized for words, at least part of OT cortex has properties particularly useful for processing words and letters. PMID:22235035

  17. When canary primes yellow: effects of semantic memory on overt attention.

    PubMed

    Léger, Laure; Chauvet, Elodie

    2015-02-01

    This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope–snake) or colour (green–frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet–piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.

  18. Operational Symbols: Can a Picture Be Worth a Thousand Words?

    DTIC Science & Technology

    1991-04-01

    internal visualization, because forms are to visual communication what words are to verbal communication. From a psychological point of view, the process... Visual Communication. Washington, DC: National Education Association, 1960. Bohannan, Anthony G. "C3I In Support of the Land Commander," in Principles... captions guide what is learned from a picture or graphic. 40. John C. Ball and Francis C. Byrnes, ed., Research, Principles, and Practices in Visual

  19. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for nonwords presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during the visual list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to those of the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  20. Auditory Selective Attention to Speech Modulates Activity in the Visual Word Form Area

    PubMed Central

    Yoncheva, Yuliya N.; Zevin, Jason D.; Maurer, Urs

    2010-01-01

    Selective attention to speech versus nonspeech signals in complex auditory input could produce top-down modulation of cortical regions previously linked to perception of spoken, and even visual, words. To isolate such top-down attentional effects, we contrasted 2 equally challenging active listening tasks, performed on the same complex auditory stimuli (words overlaid with a series of 3 tones). Instructions required selectively attending to either the speech signals (in service of rhyme judgment) or the melodic signals (tone-triplet matching). Selective attention to speech, relative to attention to melody, was associated with blood oxygenation level–dependent (BOLD) increases during functional magnetic resonance imaging (fMRI) in left inferior frontal gyrus, temporal regions, and the visual word form area (VWFA). Further investigation of the activity in visual regions revealed overall deactivation relative to baseline rest for both attention conditions. Topographic analysis demonstrated that while attending to melody drove deactivation equivalently across all fusiform regions of interest examined, attending to speech produced a regionally specific modulation: deactivation of all fusiform regions, except the VWFA. Results indicate that selective attention to speech can topographically tune extrastriate cortex, leading to increased activity in VWFA relative to surrounding regions, in line with the well-established connectivity between areas related to spoken and visual word perception in skilled readers. PMID:19571269

  1. The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

    ERIC Educational Resources Information Center

    Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.

    2017-01-01

    Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…

  2. Visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study.

    PubMed

    Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David

    2015-03-01

    Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Encoding in the visual word form area: an fMRI adaptation study of words versus handwriting.

    PubMed

    Barton, Jason J S; Fox, Christopher J; Sekunova, Alla; Iaria, Giuseppe

    2010-08-01

    Written texts are not just words but complex multidimensional stimuli, including aspects such as case, font, and handwriting style. Neuropsychological reports suggest that left fusiform lesions can impair the reading of text for word (lexical) content, being associated with alexia, whereas right-sided lesions may impair handwriting recognition. We used fMRI adaptation in 13 healthy participants to determine if repetition suppression occurred for words but not handwriting in the left visual word form area (VWFA) and the reverse in the right fusiform gyrus. Contrary to these expectations, we found adaptation for handwriting but not for words in both the left VWFA and the right VWFA homologue. A trend toward adaptation for words but not handwriting was seen only in the left middle temporal gyrus. An analysis of anterior and posterior subdivisions of the left VWFA also failed to show any adaptation for words. We conclude that the right and the left fusiform gyri show similar patterns of adaptation for handwriting, consistent with a predominantly perceptual contribution to text processing.

  4. Word meaning and the control of eye fixation: semantic competitor effects and the visual world paradigm.

    PubMed

    Huettig, Falk; Altmann, Gerry T M

    2005-05-01

    When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

  5. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
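
    The core of the described approach can be sketched as a bag-of-visual-words pipeline in which a SIFT-based and a SURF-based word histogram are computed separately and concatenated into one image signature. The sketch below assumes precomputed descriptor arrays and pre-clustered vocabularies (real SIFT descriptors are 128-dimensional and SURF descriptors 64-dimensional; toy dimensions are used here), and the function names are illustrative, not from the paper.

    ```python
    import numpy as np

    def bow_histogram(descriptors: np.ndarray, vocabulary: np.ndarray) -> np.ndarray:
        """Quantize local descriptors (n, d) against a visual-word vocabulary
        (k, d), i.e. k cluster centres, and return a normalized k-bin histogram."""
        # Euclidean distance from every descriptor to every visual word
        dists = np.linalg.norm(descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
        words = dists.argmin(axis=1)                      # nearest word per descriptor
        hist = np.bincount(words, minlength=len(vocabulary)).astype(float)
        return hist / hist.sum()

    def integrated_signature(sift_desc, surf_desc, sift_vocab, surf_vocab):
        """'Visual words integration': concatenate the SIFT-based and SURF-based
        histograms so the final signature carries both representations."""
        return np.concatenate([bow_histogram(sift_desc, sift_vocab),
                               bow_histogram(surf_desc, surf_vocab)])
    ```

    Retrieval then compares these concatenated signatures (e.g. by histogram distance) between the query image and the archive.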

  6. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Top-down processing of symbolic meanings modulates the visual word form area.

    PubMed

    Song, Yiying; Tian, Moqian; Liu, Jia

    2012-08-29

    Functional magnetic resonance imaging (fMRI) studies on humans have identified a region in the left middle fusiform gyrus consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing the sounds, meanings, or actions associated with words. Further, this top-down influence on the VWFA is independent of stimulus formats. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated with the subjective experience of the strength of the symbol-referent association across individuals. Therefore, the VWFA is likely a neural substrate for the interaction of the top-down processing of symbolic meanings with the analysis of bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.

  8. Development of sensitivity versus specificity for print in the visual word form area.

    PubMed

    Centanni, Tracy M; King, Livia W; Eddy, Marianna D; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2017-07-01

    An area near the left lateral occipito-temporal sulcus that responds preferentially to print has been designated as the visual word form area (VWFA). Research suggests that specialization in this brain region increases as reading expertise is achieved. Here we aimed to characterize that development in terms of sensitivity (response to printed words relative to non-linguistic faces) versus specificity (response to printed words versus line drawings of nameable objects) in typically reading children ages 7-14 versus young adults as measured by functional magnetic resonance imaging (fMRI). Relative to adults, children displayed equivalent sensitivity but reduced specificity. These findings suggest that sensitivity for print relative to non-linguistic stimuli develops relatively early in the VWFA in the course of reading development, but that specificity for printed words in VWFA is still developing through at least age 14. Copyright © 2017 Elsevier Inc. All rights reserved.

  9. Alphabet Avenue: Wordplay in the Fast Lane.

    ERIC Educational Resources Information Center

    Morice, Dave

    This collection of palindromes, pangrams, acrostics, word squares and word ladders, visual and numerical puzzles, silly names, and much more is designed to delight, surprise, and challenge both the novice and the expert player of word games. It uses the metaphor of a busy, cosmopolitan city to showcase three facets of words: forms of letters,…

  10. Cognate and Word Class Ambiguity Effects in Noun and Verb Processing

    ERIC Educational Resources Information Center

    Bultena, Sybrine; Dijkstra, Ton; van Hell, Janet G.

    2013-01-01

    This study examined how noun and verb processing in bilingual visual word recognition are affected by within and between-language overlap. We investigated how word class ambiguous noun and verb cognates are processed by bilinguals, to see if co-activation of overlapping word forms between languages benefits from additional overlap within a…

  11. Resting state neural networks for visual Chinese word processing in Chinese adults and children.

    PubMed

    Li, Ling; Liu, Jiangang; Chen, Feiyan; Feng, Lu; Li, Hong; Tian, Jie; Lee, Kang

    2013-07-01

    This study examined the resting state neural networks for visual Chinese word processing in Chinese children and adults. Both the functional connectivity (FC) and amplitude of low frequency fluctuation (ALFF) approaches were used to analyze the fMRI data collected when Chinese participants were not engaged in any specific explicit tasks. We correlated time series extracted from the visual word form area (VWFA) with those in other regions of the brain. We also performed ALFF analysis in the resting state FC networks. The FC results revealed that, in terms of the functionally connected brain regions, similar intrinsically organized resting state networks for visual Chinese word processing exist in adults and children, suggesting that such networks may already be functional after 3-4 years of informal exposure to reading plus 3-4 years of formal schooling. The ALFF results revealed that children appear to recruit more neural resources than adults in generally reading-irrelevant brain regions. Differences between child and adult ALFF results suggest that children's intrinsic word processing network during the resting state, though similar in functional connectivity, is still undergoing development. Further exposure to visual words and experience with reading are needed for children to develop a mature intrinsic network for word processing. The developmental course of the intrinsically organized word processing network may parallel that of the explicit word processing network. Copyright © 2013 Elsevier Ltd. All rights reserved.
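
    The seed-based FC step described here, correlating a VWFA seed time series with time series elsewhere in the brain, reduces to one Pearson correlation per region. A minimal sketch, assuming already preprocessed (detrended, nuisance-regressed) time series arrays; the function name is illustrative:

    ```python
    import numpy as np

    def seed_fc(seed_ts: np.ndarray, region_ts: np.ndarray) -> np.ndarray:
        """Seed-based functional connectivity map: Pearson correlation between
        a seed time series of shape (T,) and each column of region_ts, shape
        (T, n_regions). Assumes every time series has nonzero variance."""
        s = (seed_ts - seed_ts.mean()) / seed_ts.std()
        r = (region_ts - region_ts.mean(axis=0)) / region_ts.std(axis=0)
        # mean of products of z-scores = Pearson r, one value per region
        return (s[:, None] * r).mean(axis=0)
    ```

    In practice the resulting r values are usually Fisher z-transformed before group statistics, and the "regions" are voxels or parcels rather than the toy columns used here.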

  12. Tracking real-time neural activation of conceptual knowledge using single-trial event-related potentials.

    PubMed

    Amsel, Ben D

    2011-04-01

    Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the time course and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
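
    The modeling step described here, regressing single-trial ERP amplitudes on per-word predictors such as feature counts, can be sketched in simplified form as an ordinary least squares fit run independently at each timepoint. This is a deliberate simplification: the study used a linear mixed-effects model (which additionally handles random effects), and all names and data shapes below are assumptions for illustration.

    ```python
    import numpy as np

    def erp_feature_betas(amplitudes: np.ndarray, predictors: np.ndarray) -> np.ndarray:
        """Massed OLS regression of single-trial ERP amplitudes on item-level
        predictors.

        amplitudes: (n_trials, n_timepoints) voltage samples
        predictors: (n_trials, n_features) e.g. feature counts per word
        Returns betas of shape (n_features + 1, n_timepoints); row 0 is the
        intercept, each column is one timepoint's fit.
        """
        X = np.column_stack([np.ones(len(predictors)), predictors])
        betas, *_ = np.linalg.lstsq(X, amplitudes, rcond=None)
        return betas
    ```

    Plotting each beta row over time then yields the kind of time course of influence per knowledge type that the abstract describes.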

  13. Adding words to the brain's visual dictionary: novel word learning selectively sharpens orthographic representations in the VWFA.

    PubMed

    Glezer, Laurie S; Kim, Judy; Rule, Josh; Jiang, Xiong; Riesenhuber, Maximilian

    2015-03-25

    The nature of orthographic representations in the human brain is still the subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity for RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary. Copyright © 2015 the authors.

  14. ERP signatures of conscious and unconscious word and letter perception in an inattentional blindness paradigm.

    PubMed

    Schelonka, Kathryn; Graulty, Christian; Canseco-Gonzalez, Enriqueta; Pitts, Michael A

    2017-09-01

    A three-phase inattentional blindness paradigm was combined with ERPs. While participants performed a distracter task, line segments in the background formed words or consonant-strings. Nearly half of the participants failed to notice these word-forms and were deemed inattentionally blind. All participants noticed the word-forms in phase 2 of the experiment while they performed the same distracter task. In the final phase, participants performed a task on the word-forms. In all phases, including during inattentional blindness, word-forms elicited distinct ERPs during early latencies (∼200-280ms) suggesting unconscious orthographic processing. A subsequent ERP (∼320-380ms) similar to the visual awareness negativity appeared only when subjects were aware of the word-forms, regardless of the task. Finally, word-forms elicited a P3b (∼400-550ms) only when these stimuli were task-relevant. These results are consistent with previous inattentional blindness studies and help distinguish brain activity associated with pre- and post-perceptual processing from correlates of conscious perception. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. The unique role of the visual word form area in reading.

    PubMed

    Dehaene, Stanislas; Cohen, Laurent

    2011-06-01

    Reading systematically activates the left lateral occipitotemporal sulcus, at a site known as the visual word form area (VWFA). This site is reproducible across individuals and scripts, attuned to reading-specific processes, and partially selective for written strings relative to other categories such as line drawings. Lesions affecting the VWFA cause pure alexia, a selective deficit in word recognition. These findings must be reconciled with the fact that human genome evolution cannot have been influenced by such a recent and culturally variable activity as reading. Capitalizing on recent functional magnetic resonance imaging experiments, we provide strong corroborating evidence for the hypothesis that reading acquisition partially recycles a cortical territory evolved for object and face recognition, the prior properties of which influenced the form of writing systems. Copyright © 2011 Elsevier Ltd. All rights reserved.

  16. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  17. Effects of Numerical Surface Form in Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Orrantia, Josetxu; Múñez, David; San Romualdo, Sara; Verschaffel, Lieven

    2015-01-01

    Adults' simple arithmetic performance is more efficient when operands are presented in Arabic digit (3 + 5) than in number word (three + five) formats. One proposed explanation is that visual familiarity is higher for digits than for number words. However, most studies have been limited to single-digit addition and multiplication problems. In…

  18. Words, Hemispheres, and Processing Mechanisms: A Response to Marsolek and Deason (2007)

    ERIC Educational Resources Information Center

    Ellis, Andrew W.; Ansorge, Lydia; Lavidor, Michal

    2007-01-01

    Ellis, Ansorge and Lavidor (2007) [Ellis, A.W., Ansorge, L., & Lavidor, M. (2007). Words, hemispheres, and dissociable subsystems: The effects of exposure duration, case alternation, priming and continuity of form on word recognition in the left and right visual fields. "Brain and Language," 103, 292-303.] presented three experiments investigating…

  19. Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.

    PubMed

    Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf

    2015-09-01

    Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions (in the vicinity of the putative visual word form area) around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.

  1. The emergence of the visual word form: Longitudinal evolution of category-specific ventral visual areas during reading acquisition.

    PubMed

    Dehaene-Lambertz, Ghislaine; Monzalvo, Karla; Dehaene, Stanislas

    2018-03-01

    How does education affect cortical organization? All literate adults possess a region specialized for letter strings, the visual word form area (VWFA), within the mosaic of ventral regions involved in processing other visual categories such as objects, places, faces, or body parts. Therefore, the acquisition of literacy may induce a reorientation of cortical maps towards letters at the expense of other categories such as faces. To test this cortical recycling hypothesis, we studied how the visual cortex of individual children changes during the first months of reading acquisition. Ten 6-year-old children were scanned longitudinally 6 or 7 times with functional magnetic resonance imaging (fMRI) before and throughout the first year of school. Subjects were exposed to a variety of pictures (words, numbers, tools, houses, faces, and bodies) while performing an unrelated target-detection task. Behavioral assessment indicated a sharp rise in grapheme-phoneme knowledge and reading speed in the first trimester of school. Concurrently, voxels specific to written words and digits emerged at the VWFA location. The responses to other categories remained largely stable, although right-hemispheric face-related activity increased in proportion to reading scores. Retrospective examination of the VWFA voxels prior to reading acquisition showed that reading encroaches on voxels that are initially weakly specialized for tools and close to but distinct from those responsive to faces. Remarkably, those voxels appear to keep their initial category selectivity while acquiring an additional and stronger responsivity to words. We propose a revised model of the neuronal recycling process in which new visual categories invade weakly specified cortex while leaving previously stabilized cortical responses unchanged.

  2. Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.

    PubMed

    Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin

    2016-04-01

    There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged from 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation they were exposed to two forms of visual stimuli for 27 min. The different formats contained either visually affective content (sexually suggestive, violent or frightening material) or neutral content (a nature documentary). The group which was exposed to the emotive visual stimuli remembered significantly fewer words than the group which watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.

  3. Before the N400: effects of lexical-semantic violations in visual cortex.

    PubMed

    Dikker, Suzanne; Pylkkanen, Liina

    2011-07-01

    There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ∼100ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects. Copyright © 2011 Elsevier Inc. All rights reserved.

  4. When semantics aids phonology: A processing advantage for iconic word forms in aphasia.

    PubMed

    Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella

    2015-09-01

    Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. An exploratory study of linguistic-colour associations across languages in multilingual synaesthetes.

    PubMed

    Barnett, Kylie J; Feeney, Joanne; Gormley, Michael; Newell, Fiona N

    2009-07-01

    In one of the most common forms of synaesthesia, linguistic-colour synaesthesia, colour is induced by stimuli such as numbers, letters, days of the week, and months of the year. It is not clear, however, whether linguistic-colour synaesthesia is determined more by higher level semantic information--that is, word meaning--or by lower level grapheme or phoneme structure. To explore this issue, we tested whether colour is consistently induced by grapheme or phoneme form or word meaning in bilingual and trilingual linguistic-colour synaesthetes. We reasoned that if the induced colour was related to word meaning, rather than to the acoustic or visual properties of the words, then the induced colours would remain consistent across languages. We found that colours were not consistently related to word meaning across languages. Instead, induced colours were more related to form properties of the word across languages, particularly visual structure. However, the type of inducing stimulus influenced specific colour associations. For example, colours to months of the year were more consistent across languages than were colours to numbers or days of the week. Furthermore, the effect of inducing stimuli was also associated with the age of acquisition of additional languages. Our findings are discussed with reference to a critical period in language acquisition on synaesthesia.

  6. The putative visual word form area is functionally connected to the dorsal attention network.

    PubMed

    Vogel, Alecia C; Miezin, Fran M; Petersen, Steven E; Schlaggar, Bradley L

    2012-03-01

    The putative visual word form area (pVWFA) is the most consistently activated region in single word reading studies (e.g., Vigneau et al. 2006), yet its function remains a matter of debate. The pVWFA may be predominantly used in reading or it could be a more general visual processor used in reading but also in other visual tasks. Here, resting-state functional connectivity magnetic resonance imaging (rs-fcMRI) is used to characterize the functional relationships of the pVWFA to help adjudicate between these possibilities. rs-fcMRI defines relationships based on correlations in slow fluctuations of blood oxygen level-dependent activity occurring at rest. In this study, rs-fcMRI correlations show little relationship between the pVWFA and reading-related regions but a strong relationship between the pVWFA and dorsal attention regions thought to be related to spatial and feature attention. The rs-fcMRI correlations between the pVWFA and regions of the dorsal attention network increase with age and reading skill, while the correlations between the pVWFA and reading-related regions do not. These results argue that the pVWFA is not used predominantly in reading but is a more general visual processor used in other visual tasks, as well as reading.

  8. Modulation of brain activity by multiple lexical and word form variables in visual word recognition: A parametric fMRI study.

    PubMed

    Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann

    2008-09-01

    Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.

  9. Dissociation of sensitivity to spatial frequency in word and face preferential areas of the fusiform gyrus.

    PubMed

    Woodhead, Zoe Victoria Joan; Wise, Richard James Surtees; Sereno, Marty; Leech, Robert

    2011-10-01

    Different cortical regions within the ventral occipitotemporal junction have been reported to show preferential responses to particular objects. Thus, it is argued that there is evidence for a left-lateralized visual word form area and a right-lateralized fusiform face area, but the unique specialization of these areas remains controversial. Words are characterized by greater power in the high spatial frequency (SF) range, whereas faces comprise a broader range of high and low frequencies. We investigated how these high-order visual association areas respond to simple sine-wave gratings that varied in SF. Using functional magnetic resonance imaging, we demonstrated lateralization of activity that was concordant with the low-level visual property of words and faces; left occipitotemporal cortex is more strongly activated by high than by low SF gratings, whereas the right occipitotemporal cortex responded more to low than high spatial frequencies. Therefore, the SF of a visual stimulus may bias the lateralization of processing irrespective of its higher order properties.
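    Sine-wave gratings of the kind used as stimuli above are straightforward to generate; the following minimal sketch shows how a single spatial-frequency parameter controls the stimulus (image size, cycle counts, and orientation are illustrative values, not the study's actual parameters).

    ```python
    import numpy as np

    def make_grating(size=256, cycles=8, orientation_deg=0.0):
        """Return a size x size sine-wave grating with `cycles` cycles
        across the image at the given orientation; values lie in [-1, 1]."""
        theta = np.deg2rad(orientation_deg)
        y, x = np.mgrid[0:size, 0:size] / size
        # Coordinate along the grating's axis of luminance modulation.
        u = x * np.cos(theta) + y * np.sin(theta)
        return np.sin(2 * np.pi * cycles * u)

    low_sf = make_grating(cycles=2)    # low spatial frequency (coarse stripes)
    high_sf = make_grating(cycles=32)  # high spatial frequency (fine stripes)
    print(low_sf.shape, high_sf.shape)
    ```

    Varying only `cycles` while holding everything else constant is what lets such stimuli isolate spatial-frequency sensitivity from higher-order object properties.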

  10. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  11. Does a pear growl? Interference from semantic properties of orthographic neighbors.

    PubMed

    Pecher, Diane; de Rooij, Jimmy; Zeelenberg, René

    2009-07-01

    In this study, we investigated whether semantic properties of a word's orthographic neighbors are activated during visual word recognition. In two experiments, words were presented with a property that was not true for the word itself. We manipulated whether the property was true for an orthographic neighbor of the word. Our results showed that rejection of the property was slower and less accurate when the property was true for a neighbor than when the property was not true for a neighbor. These findings indicate that semantic information is activated before orthographic processing is finished. The present results are problematic for the links model (Forster, 2006; Forster & Hector, 2002) that was recently proposed in order to bring form-first models of visual word recognition into line with previously reported findings (Forster & Hector, 2002; Pecher, Zeelenberg, & Wagenmakers, 2005; Rodd, 2004).

  12. Making the invisible visible: verbal but not visual cues enhance visual detection.

    PubMed

    Lupyan, Gary; Spivey, Michael J

    2010-07-07

    Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual cues. Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception.
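    The sensitivity measure d' reported above is computed from hit and false-alarm rates as the difference of their z-transforms (inverse normal CDF). A minimal sketch using only the Python standard library follows; the rates are made-up numbers for illustration, not the study's data.

    ```python
    from statistics import NormalDist

    def d_prime(hit_rate, fa_rate):
        """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Hypothetical example: a verbal cue raises the hit rate while the
    # false-alarm rate stays fixed, so d' increases.
    print(d_prime(0.80, 0.20))  # cued condition
    print(d_prime(0.65, 0.20))  # uncued condition
    ```

    Because d' separates sensitivity from response bias, an increase in d' after an auditory cue indicates genuinely improved detection rather than a mere shift in willingness to respond "present".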

  13. [Representation of letter position in visual word recognition process].

    PubMed

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) High false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about within-word relative position of a letter is used in word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  14. Our World Their World

    ERIC Educational Resources Information Center

    Brisco, Nicole

    2011-01-01

    Build, create, make, blog, develop, organize, structure, perform. These are just a few verbs that illustrate the visual world. These words create images that allow students to respond to their environment. Visual culture studies recognize the predominance of visual forms of media, communication, and information in the postmodern world. This…

  16. Training-related changes in early visual processing of functionally illiterate adults: evidence from event-related brain potentials.

    PubMed

    Boltzmann, Melanie; Rüsseler, Jascha

    2013-12-13

    Event-related brain potentials (ERPs) were used to investigate training-related changes in fast visual word recognition of functionally illiterate adults. Analyses focused on the left-lateralized occipito-temporal N170, which represents the earliest processing of visual word forms. Event-related brain potentials were recorded from 20 functional illiterates receiving intensive literacy training for adults, 10 functional illiterates not participating in the training and 14 regular readers while they read words, pseudowords or viewed symbol strings. Subjects were required to press a button whenever a stimulus was immediately repeated. Attending intensive literacy training was associated with improvements in reading and writing skills and with an increase of the word-related N170 amplitude. For untrained functional illiterates and regular readers no changes in literacy skills or N170 amplitude were observed. Results of the present study suggest that the word-related N170 can still be modulated in adulthood as a result of the improvements in literacy skills.

  17. Functional magnetic resonance imaging of neural activity related to orthographic, phonological, and lexico-semantic judgments of visually presented characters and words.

    PubMed

    Fujimaki, N; Miyauchi, S; Pütz, B; Sasaki, Y; Takino, R; Sakai, K; Tamada, T

    1999-01-01

    Functional magnetic resonance imaging was used to investigate neural activity during the judgment of visual stimuli in two groups of experiments using seven and five normal subjects. The subjects were given tasks designed to differentially involve orthographic (more generally, visual form), phonological, and lexico-semantic processes. These tasks included judgments of whether a line was horizontal, whether a pseudocharacter or pseudocharacter string included a horizontal line, whether a Japanese katakana (phonogram) character or character string included a certain vowel, or whether a character string was meaningful (noun or verb) or meaningless. Neural activity related to the visual form process was commonly observed during judgments of both single real-characters and single pseudocharacters in lateral extrastriate visual cortex, the posterior ventral or medial occipito-temporal area, and the posterior inferior temporal area of both hemispheres. In contrast, left-lateralized activation was observed in the latter two areas during judgments of real- and pseudo-character strings. These results show that there is no katakana "word form center" whose activity is specific to real words. Activation related to the phonological process was observed in Broca's area, the insula, the supramarginal gyrus, and the posterior superior temporal area, with greater activation in the left hemisphere. These activation foci for visual form and phonological processes of katakana also were reported for the English alphabet in previous studies. The present activation showed no additional areas for contrasts of noun judgment with other conditions and was similar between noun and verb judgment tasks, suggesting two possibilities: no strong semantic activation was produced, or the semantic process shared activation foci with the phonological process.

  18. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind.

    PubMed

    Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir

    2012-11-08

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.

  19. Alternating-script priming in Japanese: Are Katakana and Hiragana characters interchangeable?

    PubMed

    Perea, Manuel; Nakayama, Mariko; Lupker, Stephen J

    2017-07-01

Models of written word recognition in languages using the Roman alphabet assume that a word's visual form is quickly mapped onto abstract units. This proposal is consistent with the finding that masked priming effects are of similar magnitude from lowercase, uppercase, and alternating-case primes (e.g., beard-BEARD, BEARD-BEARD, and BeArD-BEARD). We examined whether this claim can be readily generalized to the 2 syllabaries of Japanese Kana (Hiragana and Katakana). The specific rationale was that if the visual form of Kana words is lost early in the lexical access process, alternating-script repetition primes should be as effective as same-script repetition primes at activating a target word. Results showed that alternating-script repetition primes were less effective at activating lexical representations of Katakana words than same-script repetition primes; indeed, they were no more effective than partial primes that contained only the Katakana characters from the alternating-script primes. Thus, the idiosyncrasies of each writing system do appear to shape the pathways to lexical access. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Top-down modulation of ventral occipito-temporal responses during visual word recognition.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T

    2011-04-01

Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading, which instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs, prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Stimulus-driven changes in the direction of neural priming during visual word recognition.

    PubMed

    Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao

    2016-01-15

Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made a semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in the child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at the single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.

  3. The company objects keep: Linking referents together during cross-situational word learning.

    PubMed

    Zettersten, Martin; Wojcik, Erica; Benitez, Viridiana L; Saffran, Jenny

    2018-04-01

    Learning the meanings of words involves not only linking individual words to referents but also building a network of connections among entities in the world, concepts, and words. Previous studies reveal that infants and adults track the statistical co-occurrence of labels and objects across multiple ambiguous training instances to learn words. However, it is less clear whether, given distributional or attentional cues, learners also encode associations amongst the novel objects. We investigated the consequences of two types of cues that highlighted object-object links in a cross-situational word learning task: distributional structure - how frequently the referents of novel words occurred together - and visual context - whether the referents were seen on matching backgrounds. Across three experiments, we found that in addition to learning novel words, adults formed connections between frequently co-occurring objects. These findings indicate that learners exploit statistical regularities to form multiple types of associations during word learning.

  4. Spatiotemporal Dynamics of Bilingual Word Processing

    PubMed Central

    Leonard, Matthew K.; Brown, Timothy T.; Travis, Katherine E.; Gharapetian, Lusineh; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric

    2009-01-01

Studies with monolingual adults have identified successive stages occurring in different brain regions for processing single written words. We combined magnetoencephalography and magnetic resonance imaging to compare these stages between the first (L1) and second (L2) languages in bilingual adults. L1 words in a size judgment task evoked a typical left-lateralized sequence of activity first in ventral occipitotemporal cortex (VOT: previously associated with visual word-form encoding), and then ventral frontotemporal regions (associated with lexico-semantic processing). Compared to L1, words in L2 activated right VOT more strongly from ~135 ms; this activation was attenuated when words became highly familiar with repetition. At ~400 ms, L2 responses were generally later than L1, more bilateral, and included the same lateral occipitotemporal areas as were activated by pictures. We propose that acquiring a language involves the recruitment of right hemisphere and posterior visual areas that are not necessary once fluency is achieved. PMID:20004256

  5. Visualizing the qualitative: making sense of written comments from an evaluative satisfaction survey.

    PubMed

    Bletzer, Keith V

    2015-01-01

Satisfaction surveys are common in the field of health education as a means of helping organizations improve the appropriateness of training materials and the effectiveness of facilitation and presentation. Such survey data are often qualitative, and their analysis tends to be specialized. This technical article aims to show how qualitative survey results can be visualized by presenting them as a Word Cloud. Qualitative materials in the form of written comments on an agency-specific satisfaction survey were coded and quantified. The resulting quantitative data were used to convert comments into "input terms" to generate Word Clouds, increasing comprehension and accessibility through visualization of the written responses. A three-tier display incorporated a Word Cloud at the top, followed by the corresponding frequency table and a textual summary of the qualitative data represented by the Word Cloud imagery. This mixed format acknowledges that people vary in which format is most effective for assimilating new information. The combination of visual representation through Word Clouds, complemented by quantified qualitative materials, is one means of increasing comprehensibility for a range of stakeholders who might not be familiar with numerical tables or statistical analyses.
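The coding-and-quantifying step this record describes -- tallying coded "input terms" from written comments into the frequency table that feeds a Word Cloud generator -- can be sketched in a few lines of Python. This is a minimal sketch, not the article's actual procedure: the sample comments, the coding scheme, and the function name `term_frequencies` are all hypothetical.

```python
from collections import Counter

def term_frequencies(coded_comments):
    """Tally coded 'input terms' across survey comments.

    coded_comments: a list of lists, one inner list of coded terms
    per written comment (the coding scheme here is hypothetical).
    Returns (term, count) pairs sorted by descending frequency --
    the middle tier of the display (the frequency table) and the
    weighted terms a word-cloud generator would consume.
    """
    counts = Counter(term for comment in coded_comments for term in comment)
    return counts.most_common()

# Hypothetical coded comments from a satisfaction survey
comments = [
    ["helpful", "materials"],
    ["helpful", "facilitator"],
    ["materials", "helpful"],
]
table = term_frequencies(comments)
# "helpful" occurs 3 times, "materials" twice, "facilitator" once
```

Most word-cloud libraries accept weighted term lists of exactly this shape, so the same table serves both the visual tier and the tabular tier of the three-tier display.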

  6. Words in Context: The Effects of Length, Frequency, and Predictability on Brain Responses During Natural Reading

    PubMed Central

    Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio

    2016-01-01

Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging primarily stems from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region—hosting the putative visual word form area—was originally considered to be limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297

  7. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  8. Character Decomposition and Transposition Processes in Chinese Compound Words Modulates Attentional Blink.

    PubMed

    Cao, Hongwen; Gao, Min; Yan, Hongmei

    2016-01-01

    The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.

  9. Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection

    PubMed Central

    Lupyan, Gary; Spivey, Michael J.

    2010-01-01

Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646

  10. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    PubMed

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question, as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and visual-to-auditory sensory substitution, which transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letter), while the later phase (word) showed VWFA and bilateral dorsal-intraparietal activations for words. This further supports the notion that many visual regions in general, even early visual areas, maintain a predilection for task processing even when the modality is variable and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that, despite only short sensory-substitution experience, orthographic task processing can dominate semantic processing in the VWFA.
On a wider scope, this implies that at least in some cases cross-modal plasticity which enables the recruitment of areas for new tasks may be dominated by sensory independent task specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Visual Journaling: Engaging Adolescents in Sketchbook Activities

    ERIC Educational Resources Information Center

    Cummings, Karen L.

    2011-01-01

    A wonderful way to engage high-school students in sketchbook activities is to have them create journals that combine images with words to convey emotions, ideas, and understandings. Visual journaling is a creative way for them to share their experiences and personal responses to life's events in visual and written form. Through selecting and…

  12. Effects of Referent Token Variability on L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Sommers, Mitchell S.; Barcroft, Joe

    2013-01-01

    Previous research has demonstrated substantially improved second language (L2) vocabulary learning when spoken word forms are varied using multiple talkers, speaking styles, or speaking rates. In contrast, the present study varied visual representations of referents for target vocabulary. English speakers learned Spanish words in formats of no…

  13. Embodied Writing: Choreographic Composition as Methodology

    ERIC Educational Resources Information Center

    Ulmer, Jasmine B.

    2015-01-01

    This paper seeks to examine how embodied methodological approaches might inform dance education practice and research. Through a series of examples, this paper explores how choreographic writing might function as an embodied writing methodology. Here, choreographic writing is envisioned as a form of visual word choreography in which words move,…

  14. Early, Equivalent ERP Masked Priming Effects for Regular and Irregular Morphology

    ERIC Educational Resources Information Center

    Morris, Joanna; Stockall, Linnaea

    2012-01-01

    Converging evidence from behavioral masked priming (Rastle & Davis, 2008), EEG masked priming (Morris, Frank, Grainger, & Holcomb, 2007) and single word MEG (Zweig & Pylkkanen, 2008) experiments has provided robust support for a model of lexical processing which includes an early, automatic, visual word form based stage of morphological parsing…

  15. Words-in-Freedom and the Oral Tradition.

    ERIC Educational Resources Information Center

    Webster, Michael

    1989-01-01

    Explores how oral and print characteristics mesh or clash in "words-in-freedom," a form of visual poetry invented by Filippo Tommaso Marinetti. Analyzes Marinetti's poster-poem "Apres la Marne, Joffre visita le front en auto," highlighting the different natures of the two media and the coding difficulties occasioned by…

  16. Information processing of visually presented picture and word stimuli by young hearing-impaired and normal-hearing children.

    PubMed

Kelly, R R; Tomlinson-Keasey, C

    1976-12-01

    Eleven hearing-impaired children and 11 normal-hearing children (mean = four years 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.

  17. Universal brain systems for recognizing word shapes and handwriting gestures during reading

    PubMed Central

    Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas

    2012-01-01

    Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998

  18. Words, Hemispheres, and Dissociable Subsystems: The Effects of Exposure Duration, Case Alternation, Priming, and Continuity of Form on Word Recognition in the Left and Right Visual Fields

    ERIC Educational Resources Information Center

    Ellis, Andrew W.; Ansorge, Lydia; Lavidor, Michal

    2007-01-01

    Three experiments explore aspects of the dissociable neural subsystems theory of hemispheric specialisation proposed by Marsolek and colleagues, and in particular a study by [Deason, R. G., & Marsolek, C. J. (2005). A critical boundary to the left-hemisphere advantage in word processing. "Brain and Language," 92, 251-261]. Experiment 1A showed…

  19. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Different Dimensions of Cognitive Style in Typical and Atypical Cognition: New Evidence and a New Measurement Tool

    PubMed Central

    Mealor, Andy D.; Simner, Julia; Rothen, Nicolas; Carmichael, Duncan A.; Ward, Jamie

    2016-01-01

    We developed the Sussex Cognitive Styles Questionnaire (SCSQ) to investigate visual and verbal processing preferences and incorporate global/local processing orientations and systemising into a single, comprehensive measure. In Study 1 (N = 1542), factor analysis revealed six reliable subscales to the final 60 item questionnaire: Imagery Ability (relating to the use of visual mental imagery in everyday life); Technical/Spatial (relating to spatial mental imagery, and numerical and technical cognition); Language & Word Forms; Need for Organisation; Global Bias; and Systemising Tendency. Thus, we replicate previous findings that visual and verbal styles are separable, and that types of imagery can be subdivided. We extend previous research by showing that spatial imagery clusters with other abstract cognitive skills, and demonstrate that global/local bias can be separated from systemising. Study 2 validated the Technical/Spatial and Language & Word Forms factors by showing that they affect performance on memory tasks. In Study 3, we validated Imagery Ability, Technical/Spatial, Language & Word Forms, Global Bias, and Systemising Tendency by issuing the SCSQ to a sample of synaesthetes (N = 121) who report atypical cognitive profiles on these subscales. Thus, the SCSQ consolidates research from traditionally disparate areas of cognitive science into a comprehensive cognitive style measure, which can be used in the general population, and special populations. PMID:27191169

  2. Numbers and functional lateralization: A visual half-field and dichotic listening study in proficient bilinguals.

    PubMed

    Klichowski, Michal; Króliczak, Gregory

    2017-06-01

    Potential links between language and numbers and the laterality of symbolic number representations in the brain are still debated. Furthermore, reports on bilingual individuals indicate that the language-number interrelationships might be quite complex. Therefore, we carried out a visual half-field (VHF) and dichotic listening (DL) study with action words and different forms of symbolic numbers used as stimuli to test the laterality of word and number processing in single-language, dual-language, and mixed (task and language) contexts. Experiment 1 (VHF) showed a significant right visual field/left hemisphere advantage in response accuracy for action words, as compared to any form of symbolic number processing. Experiment 2 (DL) revealed a substantially reversed effect: a significant right ear/left hemisphere advantage for arithmetic operations as compared to action word processing, and in response times in single- and dual-language contexts for numbers vs. action words. All these effects were language independent. Notably, for within-task response accuracy compared across modalities, significant differences were found in all studied contexts. Thus, our results run counter to findings showing that action-relevant concepts and words, as well as number words, are represented/processed primarily in the left hemisphere. Instead, we found that in the auditory context, following substantial engagement of working memory (here, by arithmetic operations), there is a subsequent functional reorganization of the processing of single stimuli, whether verbs or numbers. This reorganization (their weakened laterality), at least for response accuracy, is not exclusive to the processing of numbers but depends on the number of items to be processed. For response times, except for unpredictable tasks in mixed contexts, the "number problem" is more apparent. These outcomes are highly relevant to difficulties that simultaneous translators encounter when dealing with lengthy auditory material in which single items such as number words (and possibly other types of key words) need to be emphasized. Our results may also shed new light on the "mathematical savant problem". Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Anatomical connections of the visual word form area.

    PubMed

    Bouhali, Florence; Thiebaut de Schotten, Michel; Pinel, Philippe; Poupon, Cyril; Mangin, Jean-François; Dehaene, Stanislas; Cohen, Laurent

    2014-11-12

    The visual word form area (VWFA), a region systematically involved in the identification of written words, occupies a reproducible location in the left occipitotemporal sulcus in expert readers of all cultures. Such a reproducible localization is paradoxical, given that reading is a recent invention that could not have influenced the genetic evolution of the cortex. Here, we test the hypothesis that the VWFA recycles a region of the ventral visual cortex that shows a high degree of anatomical connectivity to perisylvian language areas, thus providing an efficient circuit for both grapheme-phoneme conversion and lexical access. In two distinct experiments, using high-resolution diffusion-weighted data from 75 human subjects, we show that (1) the VWFA, compared with the fusiform face area, shows higher connectivity to left-hemispheric perisylvian superior temporal, anterior temporal and inferior frontal areas; (2) on a posterior-to-anterior axis, its localization within the left occipitotemporal sulcus maps onto a peak of connectivity with language areas, with slightly distinct subregions showing preferential projections to areas respectively involved in grapheme-phoneme conversion and lexical access. In agreement with functional data on the VWFA in blind subjects, the results suggest that connectivity to language areas, over and above visual factors, may be the primary determinant of VWFA localization. Copyright © 2014 the authors.

  4. MEG masked priming evidence for form-based decomposition of irregular verbs

    PubMed Central

    Fruchter, Joseph; Stockall, Linnaea; Marantz, Alec

    2013-01-01

    To what extent does morphological structure play a role in early processing of visually presented English past tense verbs? Previous masked priming studies have demonstrated effects of obligatory form-based decomposition for genuinely affixed words (teacher-TEACH) and pseudo-affixed words (corner-CORN), but not for orthographic controls (brothel-BROTH). Additionally, MEG single word reading studies have demonstrated that the transition probability from stem to affix (in genuinely affixed words) modulates an early evoked response known as the M170; parallel findings have been shown for the transition probability from stem to pseudo-affix (in pseudo-affixed words). Here, utilizing the M170 as a neural index of visual form-based morphological decomposition, we ask whether the M170 demonstrates masked morphological priming effects for irregular past tense verbs (following a previous study which obtained behavioral masked priming effects for irregulars). Dual mechanism theories of the English past tense predict a rule-based decomposition for regulars but not for irregulars, while certain single mechanism theories predict rule-based decomposition even for irregulars. MEG data were recorded for 16 subjects performing a visual masked priming lexical decision task. Using a functional region of interest (fROI) defined on the basis of repetition priming and regular morphological priming effects within the left fusiform and inferior temporal regions, we found that activity in this fROI was modulated by the masked priming manipulation for irregular verbs, during the time window of the M170. We also found effects of the scores generated by the learning model of Albright and Hayes (2003) on the degree of priming for irregular verbs. The results favor a single mechanism account of the English past tense, in which even irregulars are decomposed into stems and affixes prior to lexical access, as opposed to a dual mechanism model, in which irregulars are recognized as whole forms. PMID:24319420

  5. Determining the Effectiveness of Visual Input Enhancement across Multiple Linguistic Cues

    ERIC Educational Resources Information Center

    Comeaux, Ian; McDonald, Janet L.

    2018-01-01

    Visual input enhancement (VIE) increases the salience of grammatical forms, potentially facilitating acquisition through attention mechanisms. Native English speakers were exposed to an artificial language containing four linguistic cues (verb agreement, case marking, animacy, word order), with morphological cues either unmarked, marked in the…

  6. Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?

    PubMed Central

    Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.

    2015-01-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both the case of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799

  7. Why do pictures, but not visual words, reduce older adults' false memories?

    PubMed

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  8. Letters persistence after physical offset: visual word form area and left planum temporale. An fMRI study.

    PubMed

    Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A

    2013-06-01

    Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.

  9. The Relationship of Error and Correction of Error in Oral Reading to Visual-Form Perception and Word Attack Skills.

    ERIC Educational Resources Information Center

    Clayman, Deborah P. Goldweber

    The ability of 100 second-grade boys and girls to self-correct oral reading errors was studied in relationship to visual-form perception, phonic skills, response speed, and reading level. Each child was tested individually with the Bender-Error Test, the Gray Oral Paragraphs, and the Roswell-Chall Diagnostic Reading Test and placed into a group of…

  10. A Suggested Method for Pre-School Identification of Potential Reading Disability.

    ERIC Educational Resources Information Center

    Newton, Kenneth R.; And Others

    The relationships between prereading measures of visual-motor-perceptual skills and reading achievement were studied. Subjects were 172 first graders. Pretests and post-tests for word recognition, motor coordination, and visual perception were administered. Fourteen variables were tested. Results indicated that form-copying was more effective than…

  11. Traits and causes of environmental loss-related chemical accidents in China based on co-word analysis.

    PubMed

    Wu, Desheng; Song, Yu; Xie, Kefan; Zhang, Baofeng

    2018-04-25

    Chemical accidents are major causes of environmental losses and have been debated due to the potential threat to human beings and the environment. Compared with single statistical analyses, co-word analysis of chemical accidents illustrates significant traits at various levels and presents the data as a visual network. This study applies co-word analysis to keywords extracted from Web-crawled texts of environmental loss-related chemical accidents and uses Pearson's correlation coefficient to examine their internal attributes. To visualize the keywords of the accidents, the study carries out a multidimensional scaling analysis using PROXSCAL and centrality identification. The results show that an enormous environmental cost is exacted, and that environmental loss-related chemical accidents show distinct geographical features. Meanwhile, each event often brings more than one environmental impact. Large numbers of chemical substances are released in solid, liquid, and gas form, leading to serious consequences. Eight clusters that represent the traits of these accidents are formed, including "leakage," "poisoning," "explosion," "pipeline crack," "river pollution," "dust pollution," "emission," and "industrial effluent." "Explosion" and "gas" possess a strong correlation with "poisoning," located at the center of the visualization map.
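    The co-word pipeline this abstract describes (keyword co-occurrence counting followed by Pearson correlation over co-occurrence profiles) can be sketched in a few lines. The reports and keywords below are illustrative toy data, not the study's Web-crawled corpus:

```python
# Minimal co-word analysis sketch: count keyword co-occurrences across
# reports, then correlate co-occurrence profiles (toy data, hypothetical).
import numpy as np

# Each accident report reduced to its extracted keywords.
reports = [
    {"leakage", "gas", "poisoning"},
    {"explosion", "gas", "poisoning"},
    {"leakage", "river pollution"},
    {"explosion", "dust pollution", "gas"},
]

keywords = sorted(set().union(*reports))
index = {k: i for i, k in enumerate(keywords)}

# Co-occurrence matrix: C[i, j] counts reports mentioning both keywords.
C = np.zeros((len(keywords), len(keywords)))
for r in reports:
    for a in r:
        for b in r:
            C[index[a], index[b]] += 1

# Pearson correlation between rows (co-occurrence profiles) exposes the
# internal attributes of the keyword network, e.g. clusters.
corr = np.corrcoef(C)
print(f"corr(gas, poisoning) = {corr[index['gas'], index['poisoning']]:.2f}")
```

    A multidimensional scaling step (as the study does with PROXSCAL) would then embed `1 - corr` distances in 2-D for the visualization map.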

  12. Evidence from neglect dyslexia for morphological decomposition at the early stages of orthographic-visual analysis

    PubMed Central

    Reznick, Julia; Friedmann, Naama

    2015-01-01

    This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159

  13. Remembering Plurals: Unit of Coding and Form of Coding during Serial Recall.

    ERIC Educational Resources Information Center

    Van Der Molen, Hugo; Morton, John

    1979-01-01

    Adult females recalled lists of six words, including some plural nouns, presented visually in sequence. A frequent error was to detach the plural from its root. This supports a morpheme-based as opposed to a unitary word code. Evidence for a primarily phonological coding of the plural morpheme was obtained. (Author/RD)

  14. The role of Broca's area in speech perception: evidence from aphasia revisited.

    PubMed

    Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele

    2011-12-01

    Motor theories of speech perception have been re-vitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.

  15. Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.

    PubMed

    Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan

    2018-05-23

    Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants respond quickest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors.
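    The multivoxel pattern analysis (MVPA) logic behind "spatial directions were decodable between all three formats" can be illustrated with a toy correlation-based decoder: train on voxel patterns evoked by one format, test on patterns from another. Everything below is simulated; none of it comes from the study's fMRI data:

```python
# Toy cross-format MVPA decoder (simulated voxel patterns, not study data):
# each direction has a shared underlying pattern, observed with
# format-specific noise; test patterns are assigned to the most
# correlated training pattern.
import numpy as np

rng = np.random.default_rng(1)
directions = ["left", "right", "up", "down"]
n_voxels = 50

# Direction-specific patterns shared across formats.
base = {d: rng.normal(size=n_voxels) for d in directions}

def simulate(noise_sd):
    """One noisy observation of each direction's pattern in some format."""
    return {d: base[d] + rng.normal(scale=noise_sd, size=n_voxels)
            for d in directions}

train = simulate(0.5)   # e.g., patterns evoked by images
test = simulate(0.5)    # e.g., patterns evoked by words

# Decode: label each test pattern by its most correlated training pattern.
correct = 0
for d, pattern in test.items():
    corrs = {t: np.corrcoef(pattern, train[t])[0, 1] for t in directions}
    if max(corrs, key=corrs.get) == d:
        correct += 1
print(f"cross-format decoding accuracy: {correct}/{len(directions)}")
```

    Above-chance accuracy under this scheme indicates that the two formats share direction-specific pattern structure, which is the inference the abstract draws for cross-format decoding.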

  16. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual-words (BoW) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be less effective than desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed from the visual words and their combinations that are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP outperforms the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
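    The bag-of-visual-words step this framework builds on (quantizing local descriptors to a visual vocabulary, then treating frequently co-occurring word pairs as candidate visual phrases) can be sketched as follows. The vocabulary and descriptors are random toy data, and real DVP mining additionally uses spatial proximity and corpus-level frequency statistics that this sketch omits:

```python
# Bag-of-visual-words sketch: quantize local descriptors to the nearest
# visual word, build the image's word histogram, and count co-occurring
# word pairs as candidate visual phrases (toy data, illustrative only).
import numpy as np
from itertools import combinations
from collections import Counter

rng = np.random.default_rng(0)
vocabulary = rng.normal(size=(5, 8))          # 5 visual words, 8-D space
image_descriptors = rng.normal(size=(20, 8))  # local descriptors of an image

# Quantize: assign each descriptor to its nearest visual word.
dists = np.linalg.norm(
    image_descriptors[:, None, :] - vocabulary[None, :, :], axis=2)
words = dists.argmin(axis=1)

# BoW histogram: the image as a "visual document" over the vocabulary.
histogram = np.bincount(words, minlength=len(vocabulary))

# Candidate visual phrases: pairs of distinct visual words occurring
# together in the same image.
pairs = Counter(tuple(sorted(p)) for p in combinations(words, 2)
                if p[0] != p[1])
print(histogram)
```

    Aggregating such pair counts over a large image collection, and keeping the words and pairs most discriminative for particular objects or scenes, is the spirit of the DVW/DVP selection described in the abstract.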

  17. Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study

    PubMed Central

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

    Background In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether the phonological information is rapidly and automatically extracted in Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constant varying visual stream. In the stream, most stimuli were homophones of Chinese characters: The phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN was involved in neural activations of the visual cortex, suggesting that the visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235

  18. ERP manifestations of processing printed words at different psycholinguistic levels: time course and scalp distribution.

    PubMed

    Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J

    1999-05-01

    The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal stimuli but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.

  19. Use of Closed-Circuit Television with a Severely Visually Impaired Young Child.

    ERIC Educational Resources Information Center

    Miller-Wood, D. J.; And Others

    1990-01-01

    A closed-circuit television system was used with a five-year-old girl with severely limited vision to develop visual skills, especially skills related to concept formation. At the end of training, the girl could recognize lines, forms, shapes, letters, numbers, and words and could read short sentences. (Author/JDD)

  20. The Role of Visual Form in Lexical Access: Evidence from Chinese Classifier Production

    ERIC Educational Resources Information Center

    Bi, Yanchao; Yu, Xi; Geng, Jingyi; Alario, F. -Xavier.

    2010-01-01

    The interface between the conceptual and lexical systems was investigated in a word production setting. We tested the effects of two conceptual dimensions--semantic category and visual shape--on the selection of Chinese nouns and classifiers. Participants named pictures with nouns ("rope") or classifier-noun phrases ("one-"classifier"-rope") in…

  1. Colateralization of Broca's Area and the Visual Word form Area in Left-Handers: fMRI Evidence

    ERIC Educational Resources Information Center

    Van der Haegen, Lise; Cai, Qing; Brysbaert, Marc

    2012-01-01

    Language production has been found to be lateralized in the left hemisphere (LH) for 95% of right-handed people and about 75% of left-handers. The prevalence of atypical right hemispheric (RH) or bilateral lateralization for reading and colateralization of production with word reading laterality has never been tested in a large sample. In this…

  2. Alternating-Script Priming in Japanese: Are Katakana and Hiragana Characters Interchangeable?

    ERIC Educational Resources Information Center

    Perea, Manuel; Nakayama, Mariko; Lupker, Stephen J.

    2017-01-01

    Models of written word recognition in languages using the Roman alphabet assume that a word's visual form is quickly mapped onto abstract units. This proposal is consistent with the finding that masked priming effects are of similar magnitude from lowercase, uppercase, and alternating-case primes (e.g., beard-BEARD, BEARD-BEARD, and BeArD-BEARD).…

  3. Lexical Access in Early Stages of Visual Word Processing: A Single-Trial Correlational MEG Study of Heteronym Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2009-01-01

    We present an MEG study of heteronym recognition, aiming to distinguish between two theories of lexical access: the "early access" theory, which entails that lexical access occurs at early (pre 200 ms) stages of processing, and the "late access" theory, which interprets this early activity as orthographic word-form identification rather than…

  4. Immediate effects of form-class constraints on spoken word recognition

    PubMed Central

    Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408

  5. Neural correlates of visualizations of concrete and abstract words in preschool children: a developmental embodied approach

    PubMed Central

    D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando

    2015-01-01

    The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, the post-auditory visualization involved right-hemispheric activity following a “post-anterior” pathway sequence: occipital, parietal, and temporal areas; conversely, matching visualization involved left-hemispheric activity following an “ant-posterior” pathway sequence: frontal, temporal, parietal, and occipital areas. These results suggest that, similarly for concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodying representations. PMID:26175697

  6. Emotional words facilitate lexical but not early visual processing.

    PubMed

    Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M

    2015-12-12

    Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e. in early visual cortex. However, emotional words have led to mixed results. In the current study ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity in emotional word processing. Neutral and negative words were flickered at 12.14 Hz whilst participants performed a Lexical Decision Task. Emotional word content did not modulate the 12.14 Hz SSVEP amplitude, neither did word lexicality. However, emotional words affected the ERP. Negative compared to neutral words as well as words compared to pseudowords led to enhanced deflections in the P2 time range indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that written emotional words are facilitated in processing only subsequent to visual analysis.
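
    As an illustration of the frequency-tagging logic this record relies on, the sketch below recovers the amplitude of a 12.14 Hz SSVEP from a synthetic, noisy recording by reading off the FFT amplitude spectrum at the tagging frequency. The signal parameters (sampling rate, duration, amplitudes) are illustrative assumptions, not the study's recording settings:

```python
import numpy as np

def ssvep_amplitude(signal, fs, tag_freq):
    """Single-sided FFT amplitude at the bin nearest the tagging frequency."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n * 2  # single-sided scaling
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - tag_freq))]

# Synthetic 10-s "recording" sampled at 500 Hz: a 12.14 Hz flicker response
# (assumed amplitude 2) buried in broadband noise.
fs, dur, tag = 500, 10.0, 12.14
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = 2.0 * np.sin(2 * np.pi * tag * t) + rng.normal(0, 1.0, t.size)

amp = ssvep_amplitude(eeg, fs, tag)
```

    Comparing this amplitude across neutral and negative word conditions is, in essence, the test reported above; equal amplitudes indicate no emotional modulation of the early visual response.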

  7. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  8. The Role of Left Occipitotemporal Cortex in Reading: Reconciling Stimulus, Task, and Lexicality Effects

    PubMed Central

    Humphries, Colin; Desai, Rutvik H.; Seidenberg, Mark S.; Osmon, David C.; Stengel, Ben C.; Binder, Jeffrey R.

    2013-01-01

    Although the left posterior occipitotemporal sulcus (pOTS) has been called a visual word form area, debate persists over the selectivity of this region for reading relative to general nonorthographic visual object processing. We used high-resolution functional magnetic resonance imaging to study left pOTS responses to combinatorial orthographic and object shape information. Participants performed naming and visual discrimination tasks designed to encourage or suppress phonological encoding. During the naming task, all participants showed subregions within left pOTS that were more sensitive to combinatorial orthographic information than to object information. This difference disappeared, however, when phonological processing demands were removed. Responses were stronger to pseudowords than to words, but this effect also disappeared when phonological processing demands were removed. Subregions within the left pOTS are preferentially activated when visual input must be mapped to a phonological representation (i.e., a name) and particularly when component parts of the visual input must be mapped to corresponding phonological elements (consonant or vowel phonemes). Results indicate a specialized role for subregions within the left pOTS in the isomorphic mapping of familiar combinatorial visual patterns to phonological forms. This process distinguishes reading from picture naming and accounts for a wide range of previously reported stimulus and task effects in left pOTS. PMID:22505661

  9. The orthographic sensitivity to written Chinese in the occipital-temporal cortex.

    PubMed

    Liu, Haicheng; Jiang, Yi; Zhang, Bo; Ma, Lifei; He, Sheng; Weng, Xuchu

    2013-06-01

    Previous studies have identified an area in the left lateral fusiform cortex that is highly responsive to written words and has been named the visual word form area (VWFA). However, there is disagreement on the specific functional role of this area in word recognition. Chinese characters, which are dramatically different from Roman alphabets in the visual form and in the form to phonological mapping, provide a unique opportunity to investigate the properties of the VWFA. Specifically, to clarify the orthographic sensitivity in the mid-fusiform cortex, we compared fMRI response amplitudes (Exp. 1) as well as the spatial patterns of response across multiple voxels (Exp. 2) between Chinese characters and stimuli derived from Chinese characters with different orthographic properties. The fMRI response amplitude results suggest the existence of orthographic sensitivity in the VWFA. The results from multi-voxel pattern analysis indicate that spatial distribution of the responses across voxels in the occipitotemporal cortex contained discriminative information between the different types of character-related stimuli. These results together suggest that the orthographic rules are likely represented in a distributed neural network with the VWFA containing the most specific information regarding a stimulus' orthographic regularity.

  10. Visualizing Intercultural Literacy: Engaging Critically with Diversity and Migration in the Classroom through an Image-Based Approach

    ERIC Educational Resources Information Center

    Arizpe, Evelyn; Bagelman, Caroline; Devlin, Alison M.; Farrell, Maureen; McAdam, Julie E.

    2014-01-01

    Accessible forms of language, learning and literacy, as well as strategies that support intercultural communication are needed for the diverse population of refugee, asylum seeker and migrant children within schools. The research project "Journeys from Images to Words" explored the potential of visual texts to address these issues.…

  11. The Effect of Modality Shifts on Proactive Interference in Long-Term Memory.

    ERIC Educational Resources Information Center

    Dean, Raymond S.; And Others

    1983-01-01

    In experiment one, subjects learned a word list under blocked or random auditory/visual modality changes. In experiment two, high- and low-conceptual-rigidity subjects read passages under shift or nonshift conditions, exclusively in auditory or visual modes. A shift in modality provided a powerful release from proactive interference. (Author/CM)

  12. Beyond the visual word form area: the orthography-semantics interface in spelling and reading.

    PubMed

    Purcell, Jeremy J; Shea, Jennifer; Rapp, Brenda

    2014-01-01

    Lexical orthographic information provides the basis for recovering the meanings of words in reading and for generating correct word spellings in writing. Research has provided evidence that an area of the left ventral temporal cortex, a subregion of what is often referred to as the visual word form area (VWFA), plays a significant role specifically in lexical orthographic processing. The current investigation goes beyond this previous work by examining the neurotopography of the interface of lexical orthography with semantics. We apply a novel lesion mapping approach with three individuals with acquired dysgraphia and dyslexia who suffered lesions to left ventral temporal cortex. To map cognitive processes to their neural substrates, this lesion mapping approach applies similar logical constraints to those used in cognitive neuropsychological research. Using this approach, this investigation: (a) identifies a region anterior to the VWFA that is important in the interface of orthographic information with semantics for reading and spelling; (b) determines that, within this orthography-semantics interface region (OSIR), access to orthography from semantics (spelling) is topographically distinct from access to semantics from orthography (reading); (c) provides evidence that, within this region, there is modality-specific access to and from lexical semantics for both spoken and written modalities, in both word production and comprehension. Overall, this study contributes to our understanding of the neural architecture at the lexical orthography-semantic-phonological interface within left ventral temporal cortex.

  13. Connectivity precedes function in the development of the visual word form area.

    PubMed

    Saygin, Zeynep M; Osher, David E; Norton, Elizabeth S; Youssoufian, Deanna A; Beach, Sara D; Feather, Jenelle; Gaab, Nadine; Gabrieli, John D E; Kanwisher, Nancy

    2016-09-01

    What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development.
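
    The "connectivity fingerprint" prediction in this record can be sketched with simulated data: a regression model is trained to predict a later functional response from earlier per-tract connectivity values, then evaluated on held-out children. The sample size, tract count, and effect structure below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(42)
n_children, n_tracts = 40, 20  # hypothetical sample and tract counts

# Age-5 "connectivity fingerprints": one row of tract weights per child.
fingerprints = rng.normal(size=(n_children, n_tracts))

# Simulated age-8 VWFA response: driven by a few tracts plus noise.
true_w = np.zeros(n_tracts)
true_w[:3] = [1.5, -1.0, 0.8]
vwfa_response = fingerprints @ true_w + rng.normal(0, 0.5, n_children)

# Cross-validated prediction: each child's age-8 response is predicted by a
# model fit only on the other children's fingerprints.
pred = cross_val_predict(Ridge(alpha=1.0), fingerprints, vwfa_response, cv=5)
r = np.corrcoef(pred, vwfa_response)[0, 1]
```

    A reliably positive cross-validated correlation is the kind of evidence the study uses to argue that early connectivity predicts where the VWFA will emerge.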

  14. Affective Congruence between Sound and Meaning of Words Facilitates Semantic Decision.

    PubMed

    Aryani, Arash; Jacobs, Arthur M

    2018-05-31

    A similarity between the form and meaning of a word (i.e., iconicity) may help language users to more readily access its meaning through direct form-meaning mapping. Previous work has supported this view by providing empirical evidence for this facilitatory effect in sign language, as well as for onomatopoetic words (e.g., cuckoo) and ideophones (e.g., zigzag). Thus, it remains largely unknown whether the beneficial role of iconicity in making semantic decisions can be considered a general feature in spoken language applying also to "ordinary" words in the lexicon. By capitalizing on the affective domain, and in particular arousal, we organized words in two distinctive groups of iconic vs. non-iconic based on the congruence vs. incongruence of their lexical (meaning) and sublexical (sound) arousal. In a two-alternative forced choice task, we asked participants to evaluate the arousal of printed words that were lexically either high or low arousing. In line with our hypothesis, iconic words were evaluated more quickly and more accurately than their non-iconic counterparts. These results indicate a processing advantage for iconic words, suggesting that language users are sensitive to sound-meaning mappings even when words are presented visually and read silently.
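
    The group comparison described here reduces to contrasting response times between iconic and non-iconic items. A minimal sketch with simulated RTs; the means, standard deviations, and item counts are assumptions for illustration, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical per-trial RTs in ms; the ~60 ms advantage for iconic words
# is an assumed effect size, not the published one.
rt_iconic = rng.normal(600, 50, 60)
rt_noniconic = rng.normal(660, 50, 60)

# Independent-samples t-test: faster evaluation of iconic words shows up
# as a positive t statistic (non-iconic slower) and a small p-value.
t_stat, p_val = stats.ttest_ind(rt_noniconic, rt_iconic)
```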

  15. Developmental differences in masked form priming are not driven by vocabulary growth.

    PubMed

    Bhide, Adeetee; Schlaggar, Bradley L; Barnes, Kelly Anne

    2014-01-01

    As children develop into skilled readers, they are able to more quickly and accurately distinguish between words with similar visual forms (i.e., they develop precise lexical representations). The masked form priming lexical decision task is used to test the precision of lexical representations. In this paradigm, a prime (which differs by one letter from the target) is briefly flashed before the target is presented. Participants make a lexical decision to the target. Primes can facilitate reaction time by partially activating the lexical entry for the target. If a prime is unable to facilitate reaction time, it is assumed that participants have a precise orthographic representation of the target and thus the prime is not a close enough match to activate its lexical entry. Previous developmental work has shown that children's and adults' lexical decision times are facilitated by form primes preceding words from small neighborhoods (i.e., very few words can be formed by changing one letter in the original word; low N words), but only children are facilitated by form primes preceding words from large neighborhoods (high N words). It has been hypothesized that written vocabulary growth drives the increase in the precision of the orthographic representations; children may not know all of the neighbors of the high N words, making the words effectively low N for them. We tested this hypothesis by (1) equating the effective orthographic neighborhood size of the targets for children and adults and (2) testing whether age or vocabulary size was a better predictor of the extent of form priming. We found priming differences even when controlling for effective neighborhood size. Furthermore, age was a better predictor of form priming effects than was vocabulary size. Our findings provide no support for the hypothesis that growth in written vocabulary size gives rise to more precise lexical representations. We propose that the development of spelling ability may be a more important factor.
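
    The orthographic neighborhood size (N) measure used throughout this record counts the words that can be formed by changing exactly one letter of a target. A minimal sketch with a toy lexicon (any corpus-derived word list could be substituted):

```python
def coltheart_n(word, lexicon):
    """Coltheart's N: number of lexicon entries of the same length that
    differ from `word` at exactly one letter position."""
    word = word.lower()
    return sum(
        1
        for w in lexicon
        if len(w) == len(word)
        and w != word
        and sum(a != b for a, b in zip(w, word)) == 1
    )

# Toy lexicon; real studies use large normed word databases.
lexicon = {"cat", "cot", "car", "can", "bat", "hat", "cap", "dog", "cart"}
n_cat = coltheart_n("cat", lexicon)  # neighbors: cot, car, can, bat, hat, cap
```

    On this definition, "cat" is a high N word in the toy lexicon while "dog" has no neighbors at all, which is the low N/high N distinction the priming manipulation depends on.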

  16. Category Membership and Semantic Coding in the Cerebral Hemispheres.

    PubMed

    Turner, Casey E; Kellogg, Ronald T

    2016-01-01

    Although a gradient of category membership seems to form the internal structure of semantic categories, it is unclear whether the 2 hemispheres of the brain differ in terms of this gradient. The 2 experiments reported here examined this empirical question and explored alternative theoretical interpretations. Participants viewed category names centrally and determined whether a closely related or distantly related word presented to either the left visual field/right hemisphere (LVF/RH) or the right visual field/left hemisphere (RVF/LH) was a member of the category. Distantly related words were categorized more slowly in the LVF/RH relative to the RVF/LH, with no difference for words close to the prototype. The finding resolved past mixed results showing an unambiguous typicality effect for both visual field presentations. Furthermore, we examined items near the fuzzy border that were sometimes rejected as nonmembers of the category and found both hemispheres use the same category boundary. In Experiment 2, we presented 2 target words to be categorized, with the expectation of augmenting the speed advantage for the RVF/LH if the 2 hemispheres differ structurally. Instead the results showed a weakening of the hemispheric difference, arguing against a structural in favor of a processing explanation.

  17. The neural basis of visual word form processing: a multivariate investigation.

    PubMed

    Nestor, Adrian; Behrmann, Marlene; Plaut, David C

    2013-07-01

    Current research on the neurobiological bases of reading points to the privileged role of a ventral cortical network in visual word processing. However, the properties of this network and, in particular, its selectivity for orthographic stimuli such as words and pseudowords remain topics of significant debate. Here, we approached this issue from a novel perspective by applying pattern-based analyses to functional magnetic resonance imaging data. Specifically, we examined whether, where and how, orthographic stimuli elicit distinct patterns of activation in the human cortex. First, at the category level, multivariate mapping found extensive sensitivity throughout the ventral cortex for words relative to false-font strings. Secondly, at the identity level, the multi-voxel pattern classification provided direct evidence that different pseudowords are encoded by distinct neural patterns. Thirdly, a comparison of pseudoword and face identification revealed that both stimulus types exploit common neural resources within the ventral cortical network. These results provide novel evidence regarding the involvement of the left ventral cortex in orthographic stimulus processing and shed light on its selectivity and discriminability profile. In particular, our findings support the existence of sublexical orthographic representations within the left ventral cortex while arguing for the continuity of reading with other visual recognition skills.
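
    The identity-level multi-voxel pattern classification described here can be sketched with synthetic data: if a cross-validated classifier separates trials of two pseudowords above chance, their activation patterns carry discriminative information. The voxel counts, trial counts, and noise levels below are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_voxels, n_trials = 100, 20  # hypothetical region size and trials per item

# Two pseudowords, each with a distinct simulated mean activation pattern
# across voxels; individual trials add measurement noise.
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)
X = np.vstack(
    [pattern_a + rng.normal(0, 2, (n_trials, n_voxels)),
     pattern_b + rng.normal(0, 2, (n_trials, n_voxels))]
)
y = np.array([0] * n_trials + [1] * n_trials)

# Cross-validated decoding accuracy; chance is 0.5 for two items.
acc = cross_val_score(LinearSVC(), X, y, cv=5).mean()
```

    Above-chance accuracy is the pattern-based evidence that, in the study, supported distinct neural encodings for different pseudowords.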

  18. Form–meaning links in the development of visual word recognition

    PubMed Central

    Nation, Kate

    2009-01-01

    Learning to read takes time and it requires explicit instruction. Three decades of research has taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development. PMID:19933139

  19. Accessing orthographic representations from speech: the role of left ventral occipitotemporal cortex in spelling.

    PubMed

    Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz

    2015-04-01

    The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon.

  20. The role of syllabic structure in French visual word recognition.

    PubMed

    Rouibah, A; Taft, M

    2001-03-01

    Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.

  1. Impaired Visual Expertise for Print in French Adults with Dyslexia as Shown by N170 Tuning

    ERIC Educational Resources Information Center

    Mahe, Gwendoline; Bonnefond, Anne; Gavens, Nathalie; Dufour, Andre; Doignon-Camus, Nadege

    2012-01-01

    Efficient reading relies on expertise in the visual word form area, with abnormalities in the functional specialization of this area observed in individuals with developmental dyslexia. We have investigated event related potentials in print tuning in adults with dyslexia, based on their N170 response at 135-255 ms. Control and dyslexic adults…

  2. Can a Picture Ruin a Thousand Words? The Effects of Visual Resources in Exam Questions

    ERIC Educational Resources Information Center

    Crisp, Victoria; Sweiry, Ezekiel

    2006-01-01

    Background: When an exam question is read, a mental representation of the task is formed in each student's mind. This processing can be affected by features such as visual resources (e.g. pictures, diagrams, photographs, tables), which can come to dominate the mental representation due to their salience. Purpose: The aim of this research was to…

  3. Development of a Math-Learning App for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Beal, Carole R.; Rosenblum, L. Penny

    2015-01-01

    The project was conducted to make an online tutoring program for math word problem solving accessible to students with visual impairments (VI). An online survey of teachers of students with VI (TVIs) guided the decision to provide the math content in the form of an iPad app, accompanied by print and braille materials. The app includes audio…

  4. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing.

  6. Complete abolition of reading and writing ability with a third ventricle colloid cyst: implications for surgical intervention and proposed neural substrates of visual recognition and visual imaging ability.

    PubMed

    Barker, Lynne Ann; Morton, Nicholas; Romanowski, Charles A J; Gosden, Kevin

    2013-10-24

    We report a rare case of a patient unable to read (alexic) and write (agraphic) after a mild head injury. He had preserved speech and comprehension, could spell aloud, identify words spelt aloud and copy letter features. He was unable to visualise letters but showed no problems with digits. Neuropsychological testing revealed general visual memory, processing speed and imaging deficits. Imaging data revealed an 8 mm colloid cyst of the third ventricle that splayed the fornix. Little is known about functions mediated by fornical connectivity, but this region is thought to contribute to memory recall. Other regions thought to mediate letter recognition and letter imagery, visual word form area and visual pathways were intact. We remediated reading and writing by multimodal letter retraining. The study raises issues about the neural substrates of reading, role of fornical tracts to selective memory in the absence of other pathology, and effective remediation strategies for selective functional deficits.

  7. Altered Activation and Functional Asymmetry of Exner's Area but not the Visual Word Form Area in a Child with Sudden-onset, Persistent Mirror Writing.

    PubMed

    Linke, Annika; Roach-Fox, Elizabeth; Vriezen, Ellen; Prasad, Asuri Narayan; Cusack, Rhodri

    2018-06-02

    Mirror writing is often produced by healthy children during early acquisition of literacy, and has been observed in adults following neurological disorders or insults. The neural mechanisms responsible for involuntary mirror writing remain debated, but in healthy children, it is typically attributed to the delayed development of a process of overcoming mirror invariance while learning to read and write. We present an unusual case of sudden-onset, persistent mirror writing in a previously typical seven-year-old girl. Using her dominant right hand only, she copied and spontaneously produced all letters, words and sentences, as well as some numbers and objects, in mirror image. Additionally, she frequently misidentified letter orientations in perceptual assessments. Clinical, neuropsychological, and functional neuroimaging studies were carried out over sixteen months. Neurologic and ophthalmologic examinations and a standard clinical MRI scan of the head were normal. Neuropsychological testing revealed average scores on most tests of intellectual function, language function, verbal learning and memory. Visual perception and visual reasoning were average, with the exception of below average form constancy, and mild difficulties on some visual memory tests. Activation and functional connectivity of the reading and writing network was assessed with fMRI. During a reading task, the VWFA showed a strong response to words in mirror but not in normal letter orientation - similar to what has been observed in typically developing children previously - but activation was atypically reduced in right primary visual cortex and Exner's Area. Resting-state connectivity within the reading and writing network was similar to that of age-matched controls, but hemispheric asymmetry between the balance of motor-to-visual input was found for Exner's Area. In summary, this unusual case suggests that a disruption to visual-motor integration rather than to the VWFA can contribute to sudden-onset, persistent mirror writing in the absence of clinically detectable neurological insult.

  8. Towards a Universal Model of Reading

    PubMed Central

    Frost, Ram

    2013-01-01

    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present paper discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed. PMID:22929057

  9. Towards a universal model of reading.

    PubMed

    Frost, Ram

    2012-10-01

    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.

  10. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are which visual words to select and how to count their frequencies. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model uses a visual attention method to generate a saliency map and treats the saliency map as a weighting matrix that guides the counting of visual word frequencies. In addition, the VABOW model combines shape, color, and texture cues and uses L1-regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
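
    The saliency-weighting step described above can be sketched in a few lines. This is a minimal illustration, assuming per-feature word assignments and saliency values as inputs; the function name and the plain L1 normalization are assumptions, not the paper's implementation:

```python
import numpy as np

def vabow_histogram(assignments, saliency, n_words):
    """Saliency-weighted bag-of-visual-words histogram.

    assignments : (N,) visual word index of each local feature
    saliency    : (N,) saliency-map value at each feature's location
    Instead of adding 1 per feature, each occurrence contributes its
    saliency, so words found in attended regions dominate the histogram.
    """
    hist = np.zeros(n_words)
    for word, weight in zip(assignments, saliency):
        hist[word] += weight
    total = hist.sum()
    return hist / total if total > 0 else hist

# word 1 occurs twice, but only in low-saliency background regions,
# so it ends up with less mass than the single salient word 0
h = vabow_histogram([0, 1, 1], [0.9, 0.1, 0.2], n_words=3)
```

    A standard unweighted BoW histogram is recovered by passing a saliency vector of all ones.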

  11. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
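
    As a rough illustration of what a phonological-to-orthographic consistency measure looks like, here is a whole-word, type-count sketch. Hino et al.'s actual measure is computed over Japanese subword units, so the granularity, the toy pronunciation codes, and all names below are assumptions for illustration only:

```python
from collections import defaultdict

def po_consistency(lexicon):
    """Phonological-to-orthographic (feedback) consistency per word.

    lexicon: (spelling, pronunciation) pairs. A word's P-O consistency
    is the proportion of words sharing its pronunciation that are also
    spelled the same way (a type-count measure), so homophones with
    divergent spellings are mutually inconsistent.
    """
    by_phon = defaultdict(list)
    for orth, phon in lexicon:
        by_phon[phon].append(orth)
    return {
        (orth, phon): by_phon[phon].count(orth) / len(by_phon[phon])
        for orth, phon in lexicon
    }

# "sail"/"sale" share a pronunciation but not a spelling, so each is
# only half-consistent; "dog" has a unique mapping
lex = [("sail", "seIl"), ("sale", "seIl"), ("dog", "dOg")]
s = po_consistency(lex)
```

    The mirror-image orthographic-to-phonological (feedforward) measure follows by grouping on spelling instead of pronunciation.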

  12. Immediate lexical integration of novel word forms

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2014-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Exp 1) or passive (Exp 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants’ fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. PMID:25460382

  13. Immediate lexical integration of novel word forms.

    PubMed

    Kapnoula, Efthymia C; Packard, Stephanie; Gupta, Prahlad; McMurray, Bob

    2015-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Experiment 1) or passive (Experiment 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants' fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Illusory conjunctions in simultanagnosia: coarse coding of visual feature location?

    PubMed

    McCrea, Simon M; Buxbaum, Laurel J; Coslett, H Branch

    2006-01-01

    Simultanagnosia is a disorder characterized by an inability to see more than one object at a time. We report a simultanagnosic patient (ED) with bilateral posterior infarctions who produced frequent illusory conjunctions on tasks involving form and surface features (e.g., a red T) and form alone. ED also produced "blend" errors in which features of one familiar perceptual unit appeared to migrate to another familiar perceptual unit (e.g., "RO" read as "PQ"). ED often misread scrambled letter strings as a familiar word (e.g., "hmoe" read as "home"). Finally, ED's success in reporting two letters in an array was inversely related to the distance between the letters. These findings are consistent with the hypothesis that ED's illusory conjunctions reflect coarse coding of visual feature location that is ameliorated in part by top-down information from object and word recognition systems; the findings are also consistent, however, with Treisman's Feature Integration Theory. In addition, the data provide support for the claim that the dorsal parieto-occipital cortex is implicated in the binding of visual feature information.

  15. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  16. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval

    PubMed Central

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-01-01

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597

  17. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval.

    PubMed

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-02-12

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency.
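
    The "iterative ranking" of overall-word significance can be pictured as a HITS-style power iteration on the bipartite topic-word graph: word and topic scores reinforce each other until they stabilize. This reading of the method, and every name below, is an assumption for illustration, not the PD-LST implementation:

```python
import numpy as np

def overall_word_significance(tw, n_iter=50):
    """Iterative ranking on a bipartite topic-word graph (HITS-style).

    tw : (T, W) nonnegative topic-word significance matrix. A word is
    significant if it is strong in significant topics, and a topic is
    significant if its strong words are significant; power iteration
    with normalization lets the two scores converge jointly.
    """
    n_words = tw.shape[1]
    word = np.ones(n_words) / n_words
    for _ in range(n_iter):
        topic = tw @ word          # topics scored by their strong words
        topic /= topic.sum()
        word = tw.T @ topic        # words scored by important topics
        word /= word.sum()
    return word

def prune_dictionary(tw, keep_ratio=0.5):
    """Keep the top fraction of visual words by overall significance."""
    scores = overall_word_significance(tw)
    k = max(1, int(len(scores) * keep_ratio))
    return np.argsort(scores)[::-1][:k]

# word 0 dominates both latent topics, word 1 is weak everywhere,
# so pruning to half the dictionary retains only word 0
tw = np.array([[1.0, 0.1], [1.0, 0.1]])
kept = prune_dictionary(tw, keep_ratio=0.5)
```

    Retrieval then proceeds with BoVW histograms restricted to the retained columns, which is where the reported efficiency gain would come from.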

  18. Glioblastoma Presenting with Pure Alexia and Palinopsia Involving the Left Inferior Occipital Gyrus and Visual Word Form Area Evaluated with Functional Magnetic Resonance Imaging and Diffusion Tensor Imaging Tractography.

    PubMed

    Huang, Meng; Baskin, David S; Fung, Steve

    2016-05-01

    Rapid word recognition and reading fluency is a specialized cortical process governed by the visual word form area (VWFA), which is localized to the dominant posterior lateral occipitotemporal sulcus/fusiform gyrus. A lesion of the VWFA results in pure alexia without agraphia characterized by letter-by-letter reading. Palinopsia is a visual processing distortion characterized by persistent afterimages and has been reported in lesions involving the nondominant occipitotemporal cortex. A 67-year-old right-handed woman with no neurologic history presented to our emergency department with acute cortical ischemic symptoms that began with a transient episode of receptive aphasia. She also reported inability to read, albeit with retained writing ability. She also saw afterimages of objects. During her stroke workup, an intra-axial circumscribed enhancing mass lesion was discovered involving her dominant posterolateral occipitotemporal lobe. Given the eloquent brain involvement, she underwent preoperative functional magnetic resonance imaging with diffusion tensor imaging tractography and awake craniotomy to maximize resection and preserve function. Many organic lesions involving these regions have been reported in the literature, but to the best of our knowledge, glioblastoma involving the VWFA resulting in both clinical syndromes of pure alexia and palinopsia with superimposed functional magnetic resonance imaging and fiber tract mapping has never been reported before. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Learning to read an alphabet of human faces produces left-lateralized training effects in the fusiform gyrus.

    PubMed

    Moore, Michelle W; Durisko, Corrine; Perfetti, Charles A; Fiez, Julie A

    2014-04-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face-phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of the orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech.

  20. Defining a Conceptual Topography of Word Concreteness: Clustering Properties of Emotion, Sensation, and Magnitude among 750 English Words

    PubMed Central

    Troche, Joshua; Crutch, Sebastian J.; Reilly, Jamie

    2017-01-01

    Cognitive science has a longstanding interest in the ways that people acquire and use abstract vs. concrete words (e.g., truth vs. piano). One dominant theory holds that abstract and concrete words are subserved by two parallel semantic systems. We recently proposed an alternative account of abstract-concrete word representation premised upon a unitary, high dimensional semantic space wherein word meaning is nested. We hypothesize that a range of cognitive and perceptual dimensions (e.g., emotion, time, space, color, size, visual form) bound this space, forming a conceptual topography. Here we report a normative study where we examined the clustering properties of a sample of English words (N = 750) spanning a spectrum of concreteness in a continuous manner from highly abstract to highly concrete. Participants (N = 328) rated each target word on a range of 14 cognitive dimensions (e.g., color, emotion, valence, polarity, motion, space). The dimensions reduced to three factors: Endogenous factor, Exogenous factor, and Magnitude factor. Concepts were plotted in a unified, multimodal space with concrete and abstract concepts along a continuum. We discuss theoretical implications and practical applications of this dataset. These word norms are freely available for download and use at http://www.reilly-coglab.com/data/. PMID:29075224

  1. Defining a Conceptual Topography of Word Concreteness: Clustering Properties of Emotion, Sensation, and Magnitude among 750 English Words.

    PubMed

    Troche, Joshua; Crutch, Sebastian J; Reilly, Jamie

    2017-01-01

    Cognitive science has a longstanding interest in the ways that people acquire and use abstract vs. concrete words (e.g., truth vs. piano). One dominant theory holds that abstract and concrete words are subserved by two parallel semantic systems. We recently proposed an alternative account of abstract-concrete word representation premised upon a unitary, high dimensional semantic space wherein word meaning is nested. We hypothesize that a range of cognitive and perceptual dimensions (e.g., emotion, time, space, color, size, visual form) bound this space, forming a conceptual topography. Here we report a normative study where we examined the clustering properties of a sample of English words (N = 750) spanning a spectrum of concreteness in a continuous manner from highly abstract to highly concrete. Participants (N = 328) rated each target word on a range of 14 cognitive dimensions (e.g., color, emotion, valence, polarity, motion, space). The dimensions reduced to three factors: Endogenous factor, Exogenous factor, and Magnitude factor. Concepts were plotted in a unified, multimodal space with concrete and abstract concepts along a continuum. We discuss theoretical implications and practical applications of this dataset. These word norms are freely available for download and use at http://www.reilly-coglab.com/data/.

  2. War and peace: morphemes and full forms in a noninteractive activation parallel dual-route model.

    PubMed

    Baayen, H; Schreuder, R

    This article introduces a computational tool for modeling the process of morphological segmentation in visual and auditory word recognition in the framework of a parallel dual-route model. Copyright 1999 Academic Press.

  3. Cross-cultural effect on the brain revisited: universal structures plus writing system variation.

    PubMed

    Bolger, Donald J; Perfetti, Charles A; Schneider, Walter

    2005-05-01

    Recognizing printed words requires the mapping of graphic forms, which vary with writing systems, to linguistic forms, which vary with languages. Using a newly developed meta-analytic approach, aggregated Gaussian-estimated sources (AGES; Chein et al. [2002]: Psychol Behav 77:635-639), we examined the neuroimaging results for word reading within and across writing systems and languages. To find commonalities, we compiled 25 studies in English and other Western European languages that use an alphabetic writing system, 9 studies of native Chinese reading, 5 studies of Japanese Kana (syllabic) reading, and 4 studies of Kanji (morpho-syllabic) reading. Using the AGES approach, we created meta-images within each writing system, isolated reliable foci of activation, and compared findings across writing systems and languages. The results suggest that these writing systems utilize a common network of regions in word processing. Writing systems engage largely the same systems in terms of gross cortical regions, but localization within those regions suggests differences across writing systems. In particular, the region known as the visual word form area (VWFA) shows strikingly consistent localization across tasks and across writing systems. This region in the left mid-fusiform gyrus is critical to word recognition across writing systems and languages.

  4. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario in which a preferred structure and an undesired structure are defined, and, therefore, can effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and, thus, well maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first that addresses hierarchical visual word merging in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and outperform all the existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.
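
    A generic supervised hierarchical merging loop, of the kind such methods build on, can be sketched as follows. The class-distribution (Jensen-Shannon) merging criterion stands in for the paper's graph-embedding objective and is purely illustrative, as are all names:

```python
import numpy as np

def js_div(p, q):
    """Jensen-Shannon divergence between two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return float(np.sum(a[mask] * np.log(a[mask] / b[mask])))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def merge_words(counts, target_k):
    """Greedily merge visual words until target_k groups remain.

    counts : (W, C) array of counts between each visual word and each
             image class. At every step the two words (or merged
             groups) with the most similar class distributions are
             fused, yielding a fully determined merging hierarchy.
    """
    counts = counts.astype(float)
    groups = [[w] for w in range(counts.shape[0])]
    history = []
    while len(groups) > target_k:
        dists = [row / row.sum() for row in counts]  # per-group class distribution
        best, pair = np.inf, None
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                d = js_div(dists[i], dists[j])
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        history.append((list(groups[i]), list(groups[j])))
        groups[i] = groups[i] + groups[j]
        counts[i] += counts[j]
        del groups[j]
        counts = np.delete(counts, j, axis=0)
    return groups, history

# toy vocabulary: words 0 and 1 behave identically across the 2 classes,
# so they are merged first
counts = np.array([[10, 0], [8, 0], [0, 9]])
groups, history = merge_words(counts, target_k=2)
```

    Merging redundant words this way shrinks the BoW histogram dimension while preserving the class-discriminative structure, which is the point of supervised word merging.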

  5. Lexical enhancement during prime-target integration: ERP evidence from matched-case identity priming.

    PubMed

    Vergara-Martínez, Marta; Gómez, Pablo; Jiménez, María; Perea, Manuel

    2015-06-01

    A number of experiments have revealed that matched-case identity PRIME-TARGET pairs are responded to faster than mismatched-case identity prime-TARGET pairs for pseudowords (e.g., JUDPE-JUDPE < judpe-JUDPE), but not for words (JUDGE-JUDGE = judge-JUDGE). These findings suggest that prime-target integration processes are enhanced when the stimuli tap onto lexical representations, overriding physical differences between the stimuli (e.g., case). To track the time course of this phenomenon, we conducted an event-related potential (ERP) masked-priming lexical decision experiment that manipulated matched versus mismatched case identity in words and pseudowords. The behavioral results replicated previous research. The ERP waves revealed that matched-case identity-priming effects were found at a very early time epoch (N/P150 effects) for words and pseudowords. Importantly, around 200 ms after target onset (N250), these differences disappeared for words but not for pseudowords. These findings suggest that different-case word forms (lower- and uppercase) tap into the same abstract representation, leading to prime-target integration very early in processing. In contrast, different-case pseudoword forms are processed as two different representations. This word-pseudoword dissociation has important implications for neural accounts of visual-word recognition.

  6. Accessing orthographic representations from speech: The role of left ventral occipitotemporal cortex in spelling

    PubMed Central

    Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz

    2015-01-01

    The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. Hum Brain Mapp, 36:1393–1406, 2015. © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:25504890

  7. W-tree indexing for fast visual word generation.

    PubMed

    Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao

    2013-03-01

    The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is the visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree or a K-means tree), in order to redirect the search path to be close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the fast library for approximate nearest neighbors and the random KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
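
    The co-occurrence idea can be illustrated without the tree machinery: build a pairwise co-occurrence table from training assignments, then bias candidate ranking toward words that co-occur with those already assigned in the image. Everything below, including the linear distance/co-occurrence combination and alpha, is an assumed simplification of the W-tree scheme, not its implementation:

```python
import numpy as np
from itertools import combinations

def build_cooccurrence(image_words, n_words):
    """Count how often two visual words appear in the same image."""
    co = np.zeros((n_words, n_words))
    for words in image_words:
        for a, b in combinations(set(words), 2):
            co[a, b] += 1
            co[b, a] += 1
    return co

def rerank_candidates(candidates, distances, context, co, alpha=0.5):
    """Bias approximate-NN candidates toward co-occurring words.

    candidates : visual word indices proposed by the index structure
    distances  : their distances to the query feature
    context    : words already assigned elsewhere in the same image
    The feature-space distance is discounted by how strongly each
    candidate co-occurs with the context, nudging the search toward
    assignments that are plausible given spatial correlation.
    """
    co_score = np.array([sum(co[c, w] for w in context) for c in candidates])
    co_prob = co_score / (co_score.sum() + 1e-9)
    score = np.asarray(distances) - alpha * co_prob
    return [candidates[i] for i in np.argsort(score)]

# toy setup: words 2 and 3 habitually co-occur in training images, so
# word 3 wins over word 1 despite a slightly larger feature distance
co = build_cooccurrence([[0, 3, 2], [3, 2], [1, 4]], n_words=5)
ranked = rerank_candidates([3, 1], [0.50, 0.45], context=[2], co=co)
```

    In the paper this kind of probabilistic weight is attached to nodes of the index structure itself, so the redirection happens during traversal rather than as a post-hoc rerank.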

  8. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  9. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    PubMed

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  10. Modality exclusivity norms for 400 nouns: the relationship between perceptual experience and surface word form.

    PubMed

    Lynott, Dermot; Connell, Louise

    2013-06-01

    We present modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision). A comparison with previous norms showed that noun concepts are more multimodal than adjective concepts, as nouns tend to subsume multiple adjectival property concepts (e.g., perceptual experience of the concept baby involves auditory, haptic, olfactory, and visual properties, and hence leads to multimodal perceptual strength). To show the value of these norms, we then used them to test a prediction of the sound symbolism hypothesis: Analysis revealed a systematic relationship between strength of perceptual experience in the referent concept and surface word form, such that distinctive perceptual experience tends to attract distinctive lexical labels. In other words, modality-specific norms of perceptual strength are useful for exploring not just the nature of grounded concepts, but also the nature of form-meaning relationships. These norms will be of benefit to those interested in the representational nature of concepts, the roles of perceptual information in word processing and in grounded cognition more generally, and the relationship between form and meaning in language development and evolution.

  11. Newly learned word forms are abstract and integrated immediately after acquisition

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2015-01-01

    A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702

  12. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders.

    PubMed

    Choi, HeeSun; Geden, Michael; Feng, Jing

    2017-01-01

    Mind wandering has been considered either a mental process independent of the concurrent task or one regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by, or different from, the modality (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely to take the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally and externally oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct.

  13. More visual mind wandering occurrence during visual task performance: Modality of the concurrent task affects how the mind wanders

    PubMed Central

    Choi, HeeSun; Geden, Michael

    2017-01-01

    Mind wandering has been considered either a mental process independent of the concurrent task or one regulated like a secondary task. These accounts predict that the form of mind wandering (i.e., images or words) should be either unaffected by, or different from, the modality (i.e., visual or auditory) of the concurrent task. Findings from this study challenge these accounts. We measured the rate and the form of mind wandering in three task conditions: fixation, visual 2-back, and auditory 2-back. Contrary to the general expectation, we found that mind wandering was more likely to take the same form as the task. This result can be interpreted in light of recent findings on overlapping brain activations during internally and externally oriented processes. Our result highlights the importance of considering the unique interplay between internal and external mental processes and of measuring mind wandering as a multifaceted rather than a unitary construct. PMID:29240817

  14. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  15. Feedback Visualization in a Grammar-Based E-Learning System for German: A Preliminary User Evaluation with the COMPASS System

    ERIC Educational Resources Information Center

    Harbusch, Karin; Hausdörfer, Annette

    2016-01-01

    COMPASS is an e-learning system that can visualize grammar errors during sentence production in German as a first or second language. Via drag-and-drop dialogues, it allows users to freely select word forms from a lexicon and to combine them into phrases and sentences. The system's core component is a natural-language generator that, for every new…

  16. Abstraction and perceptual individuation in primed word identification are modulated by distortion and repetition: a dissociation.

    PubMed

    Sciama, Sonia C; Dowker, Ann

    2007-11-01

    One experiment investigated the effects of distortion and multiple prime repetition (super-repetition) on repetition priming, using divided-visual-field word identification at test and mixed-case words (e.g., goAT). The experiment measured form-specificity (the effect of matching lettercase at study and test) for two non-conceptual study tasks. For an ideal typeface, super-repetition increased form-independent priming, leaving form-specificity constant. The opposite pattern was found for a distorted typeface: super-repetition increased form-specificity, leaving form-independent priming constant. These priming effects did not depend on the study task or test hemifield for either typeface. An additional finding was that only the ideal typeface showed the usual advantage of right hemifield presentation. These results demonstrate that super-repetition produced abstraction for the ideal typeface and perceptual individuation for the distorted typeface; abstraction and perceptual individuation dissociated. We suggest that there is a fundamental duality between perceptual individuation and abstraction, consistent with Tulving's (1984) distinction between episodic and semantic memory. This could reflect a duality of system or process.

  17. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  18. Interfering Neighbours: The Impact of Novel Word Learning on the Identification of Visually Similar Words

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Hanley, Derek A.

    2005-01-01

    We assessed the impact of visual similarity on written word identification by having participants learn new words (e.g. BANARA) that were neighbours of familiar words that previously had no neighbours (e.g. BANANA). Repeated exposure to these new words made it more difficult to semantically categorize the familiar words. There was some evidence of…

  19. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis.

    PubMed

    Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M

    2016-11-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a path analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ, and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on the more difficult level of reading comprehension but not on the easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.

  20. The Processing of Visual and Phonological Configurations of Chinese One- and Two-Character Words in a Priming Task of Semantic Categorization.

    PubMed

    Ma, Bosen; Wang, Xiaoyun; Li, Degao

    2015-01-01

    To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.

  1. Models of Verbal Working Memory Capacity: What Does It Take to Make Them Work?

    PubMed Central

    Cowan, Nelson; Rouder, Jeffrey N.; Blume, Christopher L.; Saults, J. Scott

    2013-01-01

    Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multi-word units or chunks. Toward a more comprehensive theory of capacity limits, we examine models of forced-choice recognition of words within printed lists, using materials designed to produce multi-word chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the inter-word associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The model also required the addition of an activated long-term memory (LTM) component of unlimited capacity. A fixed capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. PMID:22486726

  2. Seeing visual word forms: spatial summation, eccentricity and spatial configuration.

    PubMed

    Kao, Chien-Hui; Chen, Chien-Chung

    2012-06-01

    We investigated observers' performance in detecting and discriminating visual word forms as a function of target size and retinal eccentricity. The contrast threshold of visual words was measured with a spatial two-alternative forced-choice paradigm and a Psi adaptive method. The observers were to indicate which of two sides contained a stimulus in the detection task, and which contained a real character (as opposed to a pseudo- or non-character) in the discrimination task. When the target size was sufficiently small, the detection threshold of a character decreased as its size increased, with a slope of -1/2 on log-log coordinates, up to a critical size at all eccentricities and for all stimulus types. The discrimination threshold decreased with target size with a slope of -1 up to a critical size that was dependent on stimulus type and eccentricity. Beyond that size, the threshold decreased with a slope of -1/2 on log-log coordinates before leveling out. The data were well fit by a spatial summation model that contains local receptive fields (RFs) and a summation across these filters within an attention window. Our result implies that detection is mediated by local RFs smaller than any tested stimuli and thus detection performance is dominated by summation across receptive fields. On the other hand, discrimination is dominated by a summation within a local RF in the fovea but a cross-RF summation in the periphery. Copyright © 2012 Elsevier Ltd. All rights reserved.
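    The threshold-versus-size regularities reported in this record can be written down as piecewise power laws on log-log axes. The sketch below is a simplified reading of the abstract only (the authors' full summation model is not reproduced here); the critical-size and threshold parameters are illustrative placeholders.

    ```python
    def detection_threshold(size, critical_size, t_crit):
        """Detection: slope -1/2 on log-log axes below the critical size,
        then constant (per the abstract's description)."""
        if size <= critical_size:
            return t_crit * (size / critical_size) ** -0.5
        return t_crit

    def discrimination_threshold(size, c1, c2, t2):
        """Discrimination: slope -1 up to a first critical size c1,
        slope -1/2 up to a second critical size c2, then level at t2.
        Segments are made continuous at c1 and c2."""
        if size <= c1:
            t1 = t2 * (c1 / c2) ** -0.5  # threshold at c1, continuous with next segment
            return t1 * (size / c1) ** -1.0
        if size <= c2:
            return t2 * (size / c2) ** -0.5
        return t2
    ```

    For example, with the detection branch, quadrupling the size below the critical size halves the threshold, which is exactly what a -1/2 log-log slope means.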

  3. Structural connectivity patterns associated with the putative visual word form area and children's reading ability.

    PubMed

    Fan, Qiuyun; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E

    2014-10-24

    With the advent of neuroimaging techniques, especially functional MRI (fMRI), studies have mapped brain regions that are associated with good and poor reading, most centrally a region within the left occipito-temporal/fusiform region (L-OT/F) often referred to as the visual word form area (VWFA). Despite an abundance of fMRI studies of the putative VWFA, research on its structural connectivity has only just begun. Given that the putative VWFA may be connected to distributed regions in the brain, it remains unclear how this network is engaged in constituting a well-tuned reading circuitry in the brain. Here we used diffusion MRI to study the structural connectivity patterns of the putative VWFA and surrounding areas within the L-OT/F in children with typically developing (TD) reading ability and with word recognition deficits (WRD; sometimes referred to as dyslexia). We found that L-OT/F connectivity varied along a posterior-anterior gradient, with specific structural connectivity patterns related to reading ability in the ROIs centered upon the putative VWFA. Findings suggest that the architecture of the putative VWFA connectivity is fundamentally different between TD and WRD, with TD showing greater connectivity to linguistic regions than WRD, and WRD showing greater connectivity to visual and parahippocampal regions than TD. Findings thus reveal clear structural abnormalities underlying the functional abnormalities in the putative VWFA in WRD. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Factors modulating the effect of divided attention during retrieval of words.

    PubMed

    Fernandes, Myra A; Moscovitch, Morris

    2002-07-01

    In this study, we examined variables modulating interference effects on episodic memory under divided attention conditions during retrieval for a list of unrelated words. In Experiment 1, we found that distracting tasks that required animacy or syllable decisions to visually presented words, without a memory load, produced large interference on free recall performance. In Experiment 2, a distracting task requiring phonemic decisions about nonsense words produced a far larger interference effect than one that required semantic decisions about pictures. In Experiment 3, we replicated the effect of the nonsense-word distracting task on memory and showed that an equally resource-demanding picture-based task produced significant interference with memory retrieval, although the effect was smaller in magnitude. Taken together, the results suggest that free recall is disrupted by competition for phonological or word-form representations during retrieval and, to a lesser extent, by competition for semantic representations.

  5. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Image Location Estimation by Salient Region Matching.

    PubMed

    Qian, Xueming; Zhao, Yisi; Han, Junwei

    2015-11-01

    Locations of images are now widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of a large geo-tagged image set via content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (i.e., content-based image retrieval performance). We generate visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is utilized to code the relative positions of visual words. We generate a position descriptor for each visual word and build a fast indexing structure for visual word groups. Experiments show the effectiveness of the proposed approach.
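    The grouping step named in this record, mean-shift clustering of visual-word positions, can be illustrated with a minimal flat-kernel implementation. This is a generic textbook sketch under assumed parameters (bandwidth, merge tolerance), not the authors' implementation or descriptor scheme.

    ```python
    import math

    def mean_shift(points, bandwidth, iters=30):
        """Flat-kernel mean shift: each point's mode repeatedly moves to the
        mean of all points within `bandwidth`, then modes that converged to
        (nearly) the same location are merged into clusters. Returns one
        cluster label per input point."""
        modes = [list(p) for p in points]
        for _ in range(iters):
            for i, m in enumerate(modes):
                neigh = [p for p in points if math.dist(p, m) <= bandwidth]
                if neigh:
                    modes[i] = [sum(coord) / len(neigh) for coord in zip(*neigh)]
        clusters, labels = [], []
        for m in modes:
            for k, c in enumerate(clusters):
                if math.dist(m, c) <= bandwidth / 2:
                    labels.append(k)
                    break
            else:
                clusters.append(m)
                labels.append(len(clusters) - 1)
        return labels
    ```

    Applied to 2-D keypoint coordinates of matched visual words, nearby words fall into the same group, which is the precondition for coding their relative positions.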

  7. Morphable Word Clouds for Time-Varying Text Data Visualization.

    PubMed

    Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee

    2015-12-01

    A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. Most previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transition, and they can also observe the details from the word clouds in frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.

  8. Orthographic versus semantic matching in visual search for words within lists.

    PubMed

    Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas

    2012-03-01

    An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.

  9. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to quantify the gain in performance provided by the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
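    The record defines R(a) only verbally, as audiovisual gain relative to the headroom above auditory-only performance. A common normalisation with that shape (used in the audiovisual speech literature following Sumby and Pollack) is sketched below; treating it as the exact formula used here is an assumption.

    ```python
    def visual_enhancement(av_correct, a_correct, max_score=100.0):
        """Ra-style visual enhancement: audiovisual gain (AV - A) divided by
        the room left for improvement above auditory-only performance
        (max - A). Scores are percent correct. NOTE: the exact formula in
        the source study is not given in this record; this normalisation is
        an assumption based on the verbal definition."""
        return (av_correct - a_correct) / (max_score - a_correct)
    ```

    For example, a listener scoring 60% auditory-only and 90% audiovisually has closed three quarters of the remaining gap, giving an enhancement of 0.75.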

  10. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to quantify the gain in performance provided by the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  11. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords such as viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children, which is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  12. Hemispheric differences in orthographic and semantic processing as revealed by event-related potentials

    PubMed Central

    Dickson, Danielle S.; Federmeier, Kara D.

    2015-01-01

    Differences in how the right and left hemispheres (RH, LH) apprehend visual words were examined using event-related potentials (ERPs) in a repetition paradigm with visual half-field (VF) presentation. In both hemispheres (RH/LVF, LH/RVF), initial presentation of items elicited similar and typical effects of orthographic neighborhood size, with larger N400s for orthographically regular items (words and pseudowords) than for irregular items (acronyms and meaningless illegal strings). However, hemispheric differences emerged on repetition effects. When items were repeated in the LH/RVF, orthographically regular items, relative to irregular items, elicited larger repetition effects on both the N250, a component reflecting processing at the level of visual form (orthography), and on the N400, which has been linked to semantic access. In contrast, in the RH/LVF, repetition effects were biased toward irregular items on the N250 and were similar in size across item types for the N400. The results suggest that processing in the LH is more strongly affected by wordform regularity than in the RH, either due to enhanced processing of familiar orthographic patterns or due to the fact that regular forms can be more readily mapped onto phonology. PMID:25278134

  13. Cross-Modal Binding in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Jones, Manon W.; Branigan, Holly P.; Parra, Mario A.; Logie, Robert H.

    2013-01-01

    The ability to learn visual-phonological associations is a unique predictor of word reading, and individuals with developmental dyslexia show impaired ability in learning these associations. In this study, we compared developmentally dyslexic and nondyslexic adults on their ability to form cross-modal associations (or "bindings") based…

  14. Aural-Visual-Kinesthetic Imagery in Motion Media.

    ERIC Educational Resources Information Center

    Allan, David W.

    Motion media refers to film, television, and other forms of kinesthetic media including computerized multimedia technologies and virtual reality. Imagery reproduced by motion media carries a multisensory amalgamation of mental experiences. The blending of these experiences phenomenologically intersects with the reality and perception of words,…

  15. Semantic word category processing in semantic dementia and posterior cortical atrophy.

    PubMed

    Shebani, Zubaida; Patterson, Karalyn; Nestor, Peter J; Diaz-de-Grenu, Lara Z; Dawson, Kate; Pulvermüller, Friedemann

    2017-08-01

There is general agreement that perisylvian language cortex plays a major role in lexical and semantic processing, but the contribution of additional, more widespread, brain areas in the processing of different semantic word categories remains controversial. We investigated word processing in two groups of patients whose neurodegenerative diseases preferentially affect specific parts of the brain, to determine whether their performance would vary as a function of semantic categories proposed to recruit those brain regions. Cohorts with (i) Semantic Dementia (SD), who have anterior temporal-lobe atrophy, and (ii) Posterior Cortical Atrophy (PCA), who have predominantly parieto-occipital atrophy, performed a lexical decision test on words from five different lexico-semantic categories: colour (e.g., yellow), form (oval), number (seven), spatial prepositions (under) and function words (also). Sets of pseudo-word foils matched the target words in length and bi-/tri-gram frequency. Word-frequency was matched between the two visual word categories (colour and form) and across the three other categories (number, prepositions, and function words). Age-matched healthy individuals served as controls. Although broad word processing deficits were apparent in both patient groups, the deficit was strongest for colour words in SD and for spatial prepositions in PCA. The patterns of performance on the lexical decision task demonstrate (a) general lexico-semantic processing deficits in both groups, though more prominent in SD than in PCA, and (b) differential involvement of anterior-temporal and posterior-parietal cortex in the processing of specific semantic categories of words. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  16. Learning to Read an Alphabet of Human Faces Produces Left-lateralized Training Effects in the Fusiform Gyrus

    PubMed Central

    Moore, Michelle W.; Durisko, Corrine; Perfetti, Charles A.; Fiez, Julie A.

    2014-01-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face–phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech. PMID:24168219

  17. Effects of auditory and visual modalities in recall of words.

    PubMed

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the picture modalities but not significantly different from the printed-word modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of stimulus representation was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  18. Disruption of functional networks in dyslexia: A whole-brain, data-driven analysis of connectivity

    PubMed Central

    Finn, Emily S.; Shen, Xilin; Holahan, John M.; Scheinost, Dustin; Lacadie, Cheryl; Papademetris, Xenophon; Shaywitz, Sally E.; Shaywitz, Bennett A.; Constable, R. Todd

    2013-01-01

    Background Functional connectivity analyses of fMRI data are a powerful tool for characterizing brain networks and how they are disrupted in neural disorders. However, many such analyses examine only one or a small number of a priori seed regions. Studies that consider the whole brain frequently rely on anatomic atlases to define network nodes, which may result in mixing distinct activation timecourses within a single node. Here, we improve upon previous methods by using a data-driven brain parcellation to compare connectivity profiles of dyslexic (DYS) versus non-impaired (NI) readers in the first whole-brain functional connectivity analysis of dyslexia. Methods Whole-brain connectivity was assessed in children (n = 75; 43 NI, 32 DYS) and adult (n = 104; 64 NI, 40 DYS) readers. Results Compared to NI readers, DYS readers showed divergent connectivity within the visual pathway and between visual association areas and prefrontal attention areas; increased right-hemisphere connectivity; reduced connectivity in the visual word-form area (part of the left fusiform gyrus specialized for printed words); and persistent connectivity to anterior language regions around the inferior frontal gyrus. Conclusions Together, findings suggest that NI readers are better able to integrate visual information and modulate their attention to visual stimuli, allowing them to recognize words based on their visual properties, while DYS readers recruit altered reading circuits and rely on laborious phonology-based “sounding out” strategies into adulthood. These results deepen our understanding of the neural basis of dyslexia and highlight the importance of synchrony between diverse brain regions for successful reading. PMID:24124929

  19. Processing of threat-related information outside the focus of visual attention.

    PubMed

    Calvo, Manuel G; Castillo, M Dolores

    2005-05-01

    This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by 2 simultaneous prime words (1 foveal, i.e., at fixation; 1 parafoveal, i.e., 2.2 deg. of visual angle from fixation), which were presented for 150 ms, one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.

  20. Semantic transparency in free stems: The effect of Orthography-Semantics Consistency on word recognition.

    PubMed

    Marelli, Marco; Amenta, Simona; Crepaldi, Davide

    2015-01-01

    A largely overlooked side effect in most studies of morphological priming is a consistent main effect of semantic transparency across priming conditions. That is, participants are faster at recognizing stems from transparent sets (e.g., farm) in comparison to stems from opaque sets (e.g., fruit), regardless of the preceding primes. This suggests that semantic transparency may also be consistently associated with some property of the stem word. We propose that this property might be traced back to the consistency, throughout the lexicon, between the orthographic form of a word and its meaning, here named Orthography-Semantics Consistency (OSC), and that an imbalance in OSC scores might explain the "stem transparency" effect. We exploited distributional semantic models to quantitatively characterize OSC, and tested its effect on visual word identification relying on large-scale data taken from the British Lexicon Project (BLP). Results indicated that (a) the "stem transparency" effect is solid and reliable, insofar as it holds in BLP lexical decision times (Experiment 1); (b) an imbalance in terms of OSC can account for it (Experiment 2); and (c) more generally, OSC explains variance in a large item sample from the BLP, proving to be an effective predictor in visual word access (Experiment 3).
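As a rough illustration of how OSC might be operationalized with a distributional semantic model, consider the sketch below. This is a hedged toy example, not the paper's exact formula: the vectors, frequencies, word sets, and the frequency-weighted-mean definition are all invented for illustration; real work would use vectors from a trained distributional model over a full lexicon.

```python
import math

# Hypothetical sketch: Orthography-Semantics Consistency (OSC) as the
# frequency-weighted mean semantic similarity between a stem and its
# orthographic relatives. Toy 3-D vectors stand in for a trained
# distributional semantic model; frequencies are invented.
vectors = {
    "farm":     [0.9, 0.1, 0.0],
    "farmer":   [0.8, 0.2, 0.1],
    "farming":  [0.85, 0.15, 0.05],
    "fruit":    [0.1, 0.9, 0.2],
    "fruitful": [0.2, 0.3, 0.9],   # orthographic relative, distant meaning
}
freq = {"farmer": 120, "farming": 80, "fruitful": 60}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def osc(stem, relatives):
    # frequency-weighted mean similarity between stem and its relatives
    weighted = sum(freq[r] * cosine(vectors[stem], vectors[r]) for r in relatives)
    return weighted / sum(freq[r] for r in relatives)

print(osc("farm", ["farmer", "farming"]))   # high OSC: transparent set
print(osc("fruit", ["fruitful"]))           # lower OSC: opaque set
```

On this toy definition, a "transparent" stem like farm receives a higher OSC than an "opaque" stem like fruit, mirroring the direction of the stem transparency effect described above.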

  1. Incidental orthographic learning during a color detection task.

    PubMed

    Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R

    2017-09-01

    Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  2. Subliminal repetition primes help detection of phonemes in a picture: Evidence for a phonological level of the priming effects.

    PubMed

    Manoiloff, Laura; Segui, Juan; Hallé, Pierre

    2016-01-01

    In this research, we combine a cross-form word-picture visual masked priming procedure with an internal phoneme monitoring task to examine repetition priming effects. In this paradigm, participants have to respond to pictures whose names begin with a prespecified target phoneme. This task unambiguously requires retrieving the word-form of the target picture's name and implicitly orients participants' attention towards a phonological level of representation. The experiments were conducted within Spanish, whose highly transparent orthography presumably promotes fast and automatic phonological recoding of subliminal, masked visual word primes. Experiments 1 and 2 show that repetition primes speed up internal phoneme monitoring in the target, compared to primes beginning with a different phoneme from the target, or sharing only their first phoneme with the target. This suggests that repetition primes preactivate the phonological code of the entire target picture's name, thereby speeding up internal monitoring, which is necessarily based on such a code. To further qualify the nature of the phonological code underlying internal phoneme monitoring, a concurrent articulation task was used in Experiment 3. This task did not affect the repetition priming effect. We propose that internal phoneme monitoring is based on an abstract phonological code, prior to its translation into articulation.

  3. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  4. Evaluating the Benefits of Displaying Word Prediction Lists on a Personal Digital Assistant at the Keyboard Level

    ERIC Educational Resources Information Center

    Tam, Cynthia; Wells, David

    2009-01-01

    Visual-cognitive loads influence the effectiveness of word prediction technology. Adjusting parameters of word prediction programs can lessen visual-cognitive loads. This study evaluated the benefits of WordQ word prediction software for users' performance when the prediction window was moved to a personal digital assistant (PDA) device placed at…

  5. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  6. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes severely degrade the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
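The hard-versus-soft assignment contrast at the heart of the codebook model can be sketched as follows. This is a minimal illustration on random data, not the paper's implementation: the descriptor dimensionality, vocabulary size, and Gaussian kernel width are arbitrary assumptions, and the paper evaluates several soft-assignment variants beyond the single kernel-codebook form shown here.

```python
import numpy as np

# Hypothetical sketch of hard vs. soft (kernel) codebook assignment.
# 'features' stands in for local descriptors (e.g., SIFT); 'codebook'
# for a visual-word vocabulary, typically learned with k-means.
rng = np.random.default_rng(1)
features = rng.random((500, 8))     # 500 descriptors, 8-D for brevity
codebook = rng.random((16, 8))      # 16 visual words

# distance from every feature to every visual word
dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)

# hard assignment: each feature counts only toward its nearest word
hard = np.bincount(dists.argmin(axis=1), minlength=len(codebook)).astype(float)
hard /= hard.sum()

# soft assignment (kernel codebook): each feature spreads Gaussian
# weight over all words, so a feature near a word boundary contributes
# to several histogram bins instead of exactly one
sigma = 0.3
w = np.exp(-dists**2 / (2 * sigma**2))
soft = (w / w.sum(axis=1, keepdims=True)).sum(axis=0)
soft /= soft.sum()
```

Both procedures yield a normalized visual-word histogram per image; the soft version differs only in how each feature's unit of mass is distributed, which is exactly the "assignment ambiguity" the abstract refers to.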

  7. Phonological, visual, and semantic coding strategies and children's short-term picture memory span.

    PubMed

    Henry, Lucy A; Messer, David; Luger-Klein, Scarlett; Crane, Laura

    2012-01-01

    Three experiments addressed controversies in the previous literature on the development of phonological and other forms of short-term memory coding in children, using assessments of picture memory span that ruled out potentially confounding effects of verbal input and output. Picture materials were varied in terms of phonological similarity, visual similarity, semantic similarity, and word length. Older children (6/8-year-olds), but not younger children (4/5-year-olds), demonstrated robust and consistent phonological similarity and word length effects, indicating that they were using phonological coding strategies. This confirmed findings initially reported by Conrad (1971), but subsequently questioned by other authors. However, in contrast to some previous research, little evidence was found for a distinct visual coding stage at 4 years, casting doubt on assumptions that this is a developmental stage that consistently precedes phonological coding. There was some evidence for a dual visual and phonological coding stage prior to exclusive use of phonological coding at around 5-6 years. Evidence for semantic similarity effects was limited, suggesting that semantic coding is not a key method by which young children recall lists of pictures.

  8. Phonetic Detail in the Developing Lexicon

    ERIC Educational Resources Information Center

    Swingley, Daniel

    2003-01-01

    Although infants show remarkable sensitivity to linguistically relevant phonetic variation in speech, young children sometimes appear not to make use of this sensitivity. Here, children' s knowledge of the sound-forms of familiar words was assessed using a visual fixation task. Dutch 19-month-olds were shown pairs of pictures and heard correct…

  9. Radical Thoughts on Simplifying Square Roots

    ERIC Educational Resources Information Center

    Schultz, Kyle T.; Bismarck, Stephen F.

    2013-01-01

    A picture is worth a thousand words. This statement is especially true in mathematics teaching and learning. Visual representations such as pictures, diagrams, charts, and tables can illuminate ideas that can be elusive when displayed in symbolic form only. The prevalence of representation as a mathematical process in such documents as…

  10. Heteromodal Cortical Areas Encode Sensory-Motor Features of Word Meaning.

    PubMed

    Fernandino, Leonardo; Humphries, Colin J; Conant, Lisa L; Seidenberg, Mark S; Binder, Jeffrey R

    2016-09-21

    The capacity to process information in conceptual form is a fundamental aspect of human cognition, yet little is known about how this type of information is encoded in the brain. Although the role of sensory and motor cortical areas has been a focus of recent debate, neuroimaging studies of concept representation consistently implicate a network of heteromodal areas that seem to support concept retrieval in general rather than knowledge related to any particular sensory-motor content. We used predictive machine learning on fMRI data to investigate the hypothesis that cortical areas in this "general semantic network" (GSN) encode multimodal information derived from basic sensory-motor processes, possibly functioning as convergence-divergence zones for distributed concept representation. An encoding model based on five conceptual attributes directly related to sensory-motor experience (sound, color, shape, manipulability, and visual motion) was used to predict brain activation patterns associated with individual lexical concepts in a semantic decision task. When the analysis was restricted to voxels in the GSN, the model was able to identify the activation patterns corresponding to individual concrete concepts significantly above chance. In contrast, a model based on five perceptual attributes of the word form performed at chance level. This pattern was reversed when the analysis was restricted to areas involved in the perceptual analysis of written word forms. These results indicate that heteromodal areas involved in semantic processing encode information about the relative importance of different sensory-motor attributes of concepts, possibly by storing particular combinations of sensory and motor features. The present study used a predictive encoding model of word semantics to decode conceptual information from neural activity in heteromodal cortical areas. 
The model is based on five sensory-motor attributes of word meaning (color, shape, sound, visual motion, and manipulability) and encodes the relative importance of each attribute to the meaning of a word. This is the first demonstration that heteromodal areas involved in semantic processing can discriminate between different concepts based on sensory-motor information alone. This finding indicates that the brain represents concepts as multimodal combinations of sensory and motor representations. Copyright © 2016 the authors.
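An attribute-based encoding model of this kind can be sketched with ridge regression on simulated data. This is a hedged illustration only: the study's actual estimation, cross-validation, and voxel-selection details are not reproduced, and all sizes, noise levels, and the identification rule below are invented assumptions.

```python
import numpy as np

# Hypothetical sketch: predict each voxel's response from five
# sensory-motor attribute ratings, then identify held-out concepts by
# correlating predicted with observed activation patterns.
rng = np.random.default_rng(0)
n_train, n_test, n_attr, n_vox = 80, 20, 5, 150

X_train = rng.random((n_train, n_attr))   # ratings: sound, color, shape, manipulability, motion
X_test = rng.random((n_test, n_attr))
W_true = rng.normal(size=(n_attr, n_vox))
Y_train = X_train @ W_true + 0.5 * rng.normal(size=(n_train, n_vox))
Y_test = X_test @ W_true + 0.5 * rng.normal(size=(n_test, n_vox))

lam = 1.0                                 # ridge penalty (arbitrary)
W_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_attr),
                        X_train.T @ Y_train)
Y_pred = X_test @ W_hat

# identification: a test concept is scored correct when its predicted
# pattern correlates best with its own observed pattern
corr = np.corrcoef(Y_pred, Y_test)[:n_test, n_test:]
accuracy = (corr.argmax(axis=1) == np.arange(n_test)).mean()
print(accuracy)   # chance level here would be 1/n_test = 0.05
```

The contrast reported in the abstract corresponds to swapping the attribute matrix X: a model built from sensory-motor attributes identifies concepts above chance in the general semantic network, while one built from perceptual word-form attributes does not.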

  11. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  12. A Critical Boundary to the Left-Hemisphere Advantage in Visual-Word Processing

    ERIC Educational Resources Information Center

    Deason, R.G.; Marsolek, C.J.

    2005-01-01

    Two experiments explored boundary conditions for the ubiquitous left-hemisphere advantage in visual-word recognition. Subjects perceptually identified words presented directly to the left or right hemisphere. Strong left-hemisphere advantages were observed for UPPERCASE and lowercase words. However, only a weak effect was observed for…

  13. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at full phonological overlap; in Experiment 2, at partial phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with the targets directly. Results of the three experiments showed phonological competitor effects in both the full-overlap and partial-overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  14. The anatomy of language: contributions from functional neuroimaging

    PubMed Central

    PRICE, CATHY J.

    2000-01-01

    This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th Century neurological model, which describes both the anatomy and cognitive components of auditory and visual word processing, and two 20th Century cognitive models that are not constrained by anatomy but emphasise two different routes to reading that are not present in the neurological model. A series of functional imaging studies are then presented which show that, as predicted by the 19th Century neurologists, auditory and visual word repetition engage the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie respectively in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th Century cognitive models. This region and its function may have been missed by the 19th Century neurologists because selective damage is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th Century neurologists and the cognitive finesse of the 20th Century cognitive models. The review focuses on single word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. 
Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging. PMID:11117622

  15. When a Picture Isn't Worth 1000 Words: Learners Struggle to Find Meaning in Data Visualizations

    ERIC Educational Resources Information Center

    Stofer, Kathryn A.

    2016-01-01

    The oft-repeated phrase "a picture is worth a thousand words" supposes that an image can replace a profusion of words to more easily express complex ideas. For scientific visualizations that represent profusions of numerical data, however, an untranslated academic visualization suffers the same pitfalls untranslated jargon does. Previous…

  16. Experience-Based Probabilities Modulate Expectations in a Gender-Coded Artificial Language

    PubMed Central

    Öttl, Anton; Behne, Dawn M.

    2016-01-01

    The current study combines artificial language learning with visual world eyetracking to investigate acquisition of representations associating spoken words and visual referents using morphologically complex pseudowords. Pseudowords were constructed to consistently encode referential gender by means of suffixation for a set of imaginary figures that could be either male or female. During training, the frequency of exposure to pseudowords and their imaginary figure referents were manipulated such that a given word and its referent would be more likely to occur in either the masculine form or the feminine form, or both forms would be equally likely. Results show that these experience-based probabilities affect the formation of new representations to the extent that participants were faster at recognizing a referent whose gender was consistent with the induced expectation than a referent whose gender was inconsistent with this expectation. Disambiguating gender information available from the suffix did not mask the induced expectations. Eyetracking data provide additional evidence that such expectations surface during online lexical processing. Taken together, these findings indicate that experience-based information is accessible during the earliest stages of processing, and are consistent with the view that language comprehension depends on the activation of perceptual memory traces. PMID:27602009

  17. What can graph theory tell us about word learning and lexical retrieval?

    PubMed

    Vitevitch, Michael S

    2008-04-01

    Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by two simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
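The network construction and one of the small-world diagnostics described above (the clustering coefficient) can be sketched on a toy lexicon. This is an illustrative assumption-laden sketch: orthographic strings stand in for phonemic transcriptions, the word list is invented, and a real analysis (e.g., in Pajek) would run over tens of thousands of transcribed words.

```python
import itertools

# Hypothetical sketch of a phonological network: nodes are word forms,
# and an edge links two words that differ by one segment (substitution,
# insertion, or deletion), the standard phonological-neighbor criterion.
def edit_distance_one(a, b):
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

words = ["cat", "bat", "hat", "cap", "cot", "rat", "mat", "can", "at", "coat"]
neigh = {w: {v for v in words if v != w and edit_distance_one(w, v)} for w in words}

def clustering(w):
    # fraction of a word's neighbor pairs that are themselves neighbors
    k = len(neigh[w])
    if k < 2:
        return 0.0
    links = sum(1 for u, v in itertools.combinations(neigh[w], 2) if v in neigh[u])
    return 2 * links / (k * (k - 1))

avg_clustering = sum(clustering(w) for w in words) / len(words)
print(round(avg_clustering, 3))
```

A small-world diagnosis compares this average clustering coefficient (high in small worlds) against the average shortest path length (short in small worlds), typically relative to a random graph with the same number of nodes and edges.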

  18. Short-term retention of pictures and words: evidence for dual coding systems.

    PubMed

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1975-03-01

    The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments, recall of pictures was superior to that of words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.

  19. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  20. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  1. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    PubMed

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Semantic and visual memory codes in learning disabled readers.

    PubMed

    Swanson, H L

    1984-02-01

    Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.

  3. The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition

    ERIC Educational Resources Information Center

    Robert, Christelle; Mathey, Stephanie; Zagar, Daniel

    2007-01-01

    The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g.,nasse/basse-lasse-tasse-masse) or were unequally…

  4. Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification

    ERIC Educational Resources Information Center

    Burt, Jennifer S.

    2009-01-01

    University students participated in five experiments concerning the effects of unmasked, orthographically similar, primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…

  5. Evidence for Early Morphological Decomposition in Visual Word Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2010-01-01

    We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…

  6. Intrusive effects of implicitly processed information on explicit memory.

    PubMed

    Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven

    2002-02-01

    This study described the interference of implicitly processed information on the memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitated implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality for the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information and in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."

  7. Sublexical ambiguity effect in reading Chinese disyllabic compounds.

    PubMed

    Huang, Hsu-Wen; Lee, Chia-Ying; Tsai, Jie-Li; Tzeng, Ovid J-L

    2011-05-01

    For Chinese compounds, neighbors can share either both orthographic forms and meanings, or orthographic forms only. In this study, central presentation and visual half-field (VF) presentation methods were used in conjunction with ERP measures to investigate how readers solve the sublexical semantic ambiguity of the first constituent character in reading a disyllabic compound. The sublexical ambiguity of the first character was manipulated while the orthographic neighborhood sizes of the first and second character (NS1, NS2) were controlled. Subjective rating of number of meanings corresponding to a character was used as an index of sublexical ambiguity. Results showed that low sublexical ambiguity words elicited a more negative N400 than high sublexical ambiguity words when words were centrally presented. Similar patterns were found when words were presented to the left VF. Interestingly, different patterns were observed for pseudowords. With left VF presentation, high sublexical ambiguity pseudowords showed a more negative N400 than low sublexical ambiguity pseudowords. In contrast, with right VF presentation, low sublexical ambiguity pseudowords showed a more negative N400 than high sublexical ambiguity pseudowords. These findings indicate that a level of morphological representation between form and meaning needs to be established and refined in Chinese. In addition, hemispheric asymmetries in the use of word information in ambiguity resolution should be taken into account, even at the sublexical level. 2011 Elsevier Inc. All rights reserved.

  8. Lexical precision in skilled readers: Individual differences in masked neighbor priming.

    PubMed

    Andrews, Sally; Hersch, Jolyn

    2010-05-01

    Two experiments investigated the relationship between masked form priming and individual differences in reading and spelling proficiency among university students. Experiment 1 assessed neighbor priming for 4-letter word targets from high- and low-density neighborhoods in 97 university students. The overall results replicated previous evidence of facilitatory neighborhood priming only for low-neighborhood words. However, analyses including measures of reading and spelling proficiency as covariates revealed that better spellers showed inhibitory priming for high-neighborhood words, while poorer spellers showed facilitatory priming. Experiment 2, with 123 participants, replicated the finding of stronger inhibitory neighbor priming in better spellers using 5-letter words and distinguished facilitatory and inhibitory components of priming by comparing neighbor primes with ambiguous and unambiguous partial-word primes (e.g., crow#, cr#wd, and crown for the target CROWD). The results indicate that spelling ability is selectively associated with inhibitory effects of lexical competition. The implications for theories of visual word recognition and the lexical quality hypothesis of reading skill are discussed.

  9. Event-Related Potential Evidence in Chinese Children: Type of Literacy Training Modulates Neural Orthographic Sensitivity

    ERIC Educational Resources Information Center

    Zhao, Pei; Zhao, Jing; Weng, Xuchu; Li, Su

    2018-01-01

    Visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of visual word N170 and its close association with individuals' reading ability. In the current study, we investigated whether fine-tuning N170 for Chinese characters could emerge after…

  10. Balloons and bavoons versus spikes and shikes: ERPs reveal shared neural processes for shape-sound-meaning congruence in words, and shape-sound congruence in pseudowords.

    PubMed

    Sučević, Jelena; Savić, Andrej M; Popović, Mirjana B; Styles, Suzy J; Ković, Vanja

    2015-01-01

    There is something about the sound of a pseudoword like takete that goes better with a spiky, than a curvy shape (Köhler, 1929/1947). Yet despite decades of research into sound symbolism, the role of this effect on real words in the lexicons of natural languages remains controversial. We report one behavioural and one ERP study investigating whether sound symbolism is active during normal language processing for real words in a speaker's native language, in the same way as for novel word forms. The results indicate that sound-symbolic congruence has a number of influences on natural language processing: Written forms presented in a congruent visual context generate more errors during lexical access, as well as a chain of differences in the ERP. These effects have a very early onset (40-80 ms, 100-160 ms, 280-320 ms) and are later overshadowed by familiar types of semantic processing, indicating that sound symbolism represents an early sensory-co-activation effect. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  12. Hemispheric differences in orthographic and semantic processing as revealed by event-related potentials.

    PubMed

    Dickson, Danielle S; Federmeier, Kara D

    2014-11-01

    Differences in how the right and left hemispheres (RH, LH) apprehend visual words were examined using event-related potentials (ERPs) in a repetition paradigm with visual half-field (VF) presentation. In both hemispheres (RH/LVF, LH/RVF), initial presentation of items elicited similar and typical effects of orthographic neighborhood size, with larger N400s for orthographically regular items (words and pseudowords) than for irregular items (acronyms and meaningless illegal strings). However, hemispheric differences emerged on repetition effects. When items were repeated in the LH/RVF, orthographically regular items, relative to irregular items, elicited larger repetition effects on both the N250, a component reflecting processing at the level of visual form (orthography), and on the N400, which has been linked to semantic access. In contrast, in the RH/LVF, repetition effects were biased toward irregular items on the N250 and were similar in size across item types for the N400. The results suggest that processing in the LH is more strongly affected by wordform regularity than in the RH, either due to enhanced processing of familiar orthographic patterns or due to the fact that regular forms can be more readily mapped onto phonology. Copyright © 2014 Elsevier Ltd. All rights reserved.

  13. Disruption of functional networks in dyslexia: a whole-brain, data-driven analysis of connectivity.

    PubMed

    Finn, Emily S; Shen, Xilin; Holahan, John M; Scheinost, Dustin; Lacadie, Cheryl; Papademetris, Xenophon; Shaywitz, Sally E; Shaywitz, Bennett A; Constable, R Todd

    2014-09-01

    Functional connectivity analyses of functional magnetic resonance imaging data are a powerful tool for characterizing brain networks and how they are disrupted in neural disorders. However, many such analyses examine only one or a small number of a priori seed regions. Studies that consider the whole brain frequently rely on anatomic atlases to define network nodes, which might result in mixing distinct activation time-courses within a single node. Here, we improve upon previous methods by using a data-driven brain parcellation to compare connectivity profiles of dyslexic (DYS) versus non-impaired (NI) readers in the first whole-brain functional connectivity analysis of dyslexia. Whole-brain connectivity was assessed in children (n = 75; 43 NI, 32 DYS) and adult (n = 104; 64 NI, 40 DYS) readers. Compared to NI readers, DYS readers showed divergent connectivity within the visual pathway and between visual association areas and prefrontal attention areas; increased right-hemisphere connectivity; reduced connectivity in the visual word-form area (part of the left fusiform gyrus specialized for printed words); and persistent connectivity to anterior language regions around the inferior frontal gyrus. Together, findings suggest that NI readers are better able to integrate visual information and modulate their attention to visual stimuli, allowing them to recognize words on the basis of their visual properties, whereas DYS readers recruit altered reading circuits and rely on laborious phonology-based "sounding out" strategies into adulthood. These results deepen our understanding of the neural basis of dyslexia and highlight the importance of synchrony between diverse brain regions for successful reading. © 2013 Society of Biological Psychiatry. Published by Society of Biological Psychiatry. All rights reserved.

  14. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.

  15. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had been presented during the encoding phase and, if so, with which type of information it had been paired. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.

  16. Fixation-related FMRI analysis in the domain of reading research: using self-paced eye movements as markers for hemodynamic brain responses during visual letter string processing.

    PubMed

    Richlan, Fabio; Gagl, Benjamin; Hawelka, Stefan; Braun, Mario; Schurz, Matthias; Kronbichler, Martin; Hutzler, Florian

    2014-10-01

    The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. © The Author 2013. Published by Oxford University Press.

  17. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials.

    PubMed

    van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A

    2005-05-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval subjects were presented with a lexical decision task to auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.

  18. Evidence for the Activation of Sensorimotor Information during Visual Word Recognition: The Body-Object Interaction Effect

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…

  19. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

    In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
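    The set-size pattern described in this record (RTs linear in visual set size, logarithmic in memory set size) can be written as a simple additive model. The coefficient values below are hypothetical placeholders chosen for illustration, not estimates from the study:

```python
import math

def hybrid_search_rt(visual_n, memory_n,
                     base=500.0, per_visual=40.0, per_log_memory=50.0):
    """Sketch of a hybrid-search response time (ms).

    RT grows linearly with the number of items on screen (visual_n) and
    logarithmically with the number of targets held in memory (memory_n).
    All coefficients are illustrative, not fitted values from the study.
    """
    return base + per_visual * visual_n + per_log_memory * math.log2(memory_n)

# Doubling the memorized list adds a constant increment to RT...
step_small = hybrid_search_rt(8, 4) - hybrid_search_rt(8, 2)
step_large = hybrid_search_rt(8, 16) - hybrid_search_rt(8, 8)
# ...whereas each additional on-screen word adds a fixed per-item cost.
```

    Under such a model, memorizing 100 words instead of 50 costs the same extra search time as memorizing 4 instead of 2, which is why hybrid search remains tractable even with very large memory sets.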

  20. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    PubMed

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable of reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.

  1. Linguistic processing in visual and modality-nonspecific brain areas: PET recordings during selective attention.

    PubMed

    Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto

    2004-07-01

    Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing effects, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's areas, BA 45 and 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.

  2. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  3. Teaching the Meaning of Words to Children with Visual Impairments

    ERIC Educational Resources Information Center

    Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.

    2014-01-01

    In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…

  4. Eye Movement Behaviour during Reading of Japanese Sentences: Effects of Word Length and Visual Complexity

    ERIC Educational Resources Information Center

    White, Sarah J.; Hirotani, Masako; Liversedge, Simon P.

    2012-01-01

    Two experiments are presented that examine how the visual characteristics of Japanese words influence eye movement behaviour during reading. In Experiment 1, reading behaviour was compared for words comprising either one or two kanji characters. The one-character words were significantly less likely to be fixated on first-pass, and had…

  5. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  6. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  7. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  8. Functions of graphemic and phonemic codes in visual word-recognition.

    PubMed

    Meyer, D E; Schvaneveldt, R W; Ruddy, M G

    1974-03-01

    Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.

  9. Evaluating the developmental trajectory of the episodic buffer component of working memory and its relation to word recognition in children.

    PubMed

    Wang, Shinmin; Allen, Richard J; Lee, Jun Ren; Hsieh, Chia-En

    2015-05-01

    The creation of temporary bound representation of information from different sources is one of the key abilities attributed to the episodic buffer component of working memory. Whereas the role of working memory in word learning has received substantial attention, very little is known about the link between the development of word recognition skills and the ability to bind information in the episodic buffer of working memory and how it may develop with age. This study examined the performance of Grade 2 children (8 years old), Grade 3 children (9 years old), and young adults on a task designed to measure their ability to bind visual and auditory-verbal information in working memory. Children's performance on this task significantly correlated with their word recognition skills even when chronological age, memory for individual elements, and other possible reading-related factors were taken into account. In addition, clear developmental trajectories were observed, with improvements in the ability to hold temporary bound information in working memory between Grades 2 and 3, and between the child and adult groups, that were independent from memory for the individual elements. These findings suggest that the capacity to temporarily bind novel auditory-verbal information to visual form in working memory is linked to the development of word recognition in children and improves with age. Copyright © 2015 Elsevier Inc. All rights reserved.

  10. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    In this paper, the automatic lip-reading problem is investigated, and an innovative approach to solving it is proposed. This new VSR (visual speech recognition) approach depends on the signature of the word itself, obtained from a hybrid feature extraction method that combines geometric, appearance, and image-transform features. The proposed VSR approach is termed "visual words". The visual words approach consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips are extracted: the height and width of the mouth; the mutual information and quality measurement between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of the teeth. Each spoken word is represented by 8 signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using KNN and DTW. This approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation has demonstrated the efficiency of the visual words approach and shown that VSR is a speaker-dependent problem.
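    The recognition stage described above (per-word feature signals compared by dynamic time warping, then classified by nearest neighbors) can be sketched as follows. This is a minimal illustration under assumed conventions, not the authors' implementation; the function names are invented, feature extraction is omitted, and real words would carry 8 feature signals rather than the single toy signal used here:

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """Dynamic time warping distance between two 1-D feature signals."""
        n, m = len(a), len(b)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = abs(a[i - 1] - b[j - 1])
                cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                     cost[i, j - 1],      # deletion
                                     cost[i - 1, j - 1])  # match
        return cost[n, m]

    def word_distance(word_a, word_b):
        """Sum DTW distances over the per-feature signals of two spoken words."""
        return sum(dtw_distance(fa, fb) for fa, fb in zip(word_a, word_b))

    def knn_classify(query, training_words, labels, k=1):
        """Label a query word by majority vote of its k nearest training words."""
        dists = [word_distance(query, w) for w in training_words]
        order = np.argsort(dists)[:k]
        votes = [labels[i] for i in order]
        return max(set(votes), key=votes.count)
    ```

    DTW lets two utterances of the same word match even when they are spoken at different speeds, which is why it is a natural distance for these per-word signals.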

  11. Do handwritten words magnify lexical effects in visual word recognition?

    PubMed

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  12. Visual Testing: An Experimental Assessment of the Encoding Specificity Hypothesis.

    ERIC Educational Resources Information Center

    DeMelo, Hermes T.; And Others

    This study of 96 high school biology students investigates the effectiveness of visual instruction composed of simple line drawings and printed words as compared to printed-words-only instruction, visual tests, and the interaction between visual or non-visual mode of instruction and mode of testing. The subjects were randomly assigned to be given…

  13. Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease

    PubMed Central

    Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.

    2013-01-01

    We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256

  14. Strengthening the Visual Element in Visual Media Materials.

    ERIC Educational Resources Information Center

    Wilhelm, R. Dwight

    1996-01-01

    Describes how to more effectively communicate the visual element in video and audiovisual materials. Discusses identifying a central topic, developing the visual content without words, preparing a storyboard, testing its effectiveness on people who are unacquainted with the production, and writing the script with as few words as possible. (AEF)

  15. What you say matters: exploring visual-verbal interactions in visual working memory.

    PubMed

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  16. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  17. Embodied attention and word learning by toddlers

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116

  18. The activation of segmental and tonal information in visual word recognition.

    PubMed

    Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan

    2013-08-01

    Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., , hong2, "red"), homophones of the color characters (S+T+; e.g., , hong2, "flood"), different-tone homophones (S+T-; e.g., , hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., , ping2, "bottle"), and neutral characters (S-T-; e.g., , qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.

  19. The Impact of Sexual Media on Second Language Vocabulary Retrieval.

    PubMed

    Çetin, Yakup

    2015-12-01

    Both Islam and Christianity warn their adherents not to view or to display obscene matter. Aside from religious consequences in the afterlife for such behavior, this study was conducted to determine if viewing sexual media has a detrimental effect in earthly life. Adolescents (n = 64) 17-22 years were exposed to two types of visual stimuli containing sexual or neutral content for 30 min. The participants, seated in rooms with comfortable chairs and provided with snacks, were shown a selection of 18 German words via a PowerPoint slideshow, which included a picture, an audio recording, and the written form of each word. The experimental group, which was exposed to arousing visual stimuli with mild sexual content (movie trailers, music video clips, and TV commercials), remembered significantly fewer words than the control group, which viewed a nature documentary without sexual content. T-test scores revealed that exposure to sexually arousing media impaired memory for second language (L2) vocabulary. Apart from leading to dire consequences in the hereafter, the results of the study demonstrate that viewing obscene material also causes harm in this life.

  20. Reading impairment in schizophrenia: dysconnectivity within the visual system.

    PubMed

    Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël

    2014-01-01

    Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.

  1. Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2011-03-30

    Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming at investigating the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  2. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  3. ESTEEM: A Novel Framework for Qualitatively Evaluating and Visualizing Spatiotemporal Embeddings in Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Volkova, Svitlana

    Analyzing and visualizing large amounts of social media communications and contrasting short-term conversation changes over time and geo-locations is extremely important for commercial and government applications. Earlier approaches for large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings – low-dimensional word representations in a continuous vector space where similar words are embedded nearby each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
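    The core query such a tool supports, retrieving a word's closest neighbors in embedding space for a given time slice, can be sketched with cosine similarity over a toy vocabulary. The vocabulary and vectors below are fabricated for illustration; the actual tool learns its embeddings from streaming social media text:

    ```python
    import numpy as np

    def nearest_neighbors(word, vocab, embeddings, k=3):
        """Return the k words whose embeddings are most cosine-similar to `word`.

        vocab: list of words; embeddings: array of shape (len(vocab), dim).
        """
        # L2-normalize rows so the dot product equals cosine similarity.
        vecs = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
        target = vecs[vocab.index(word)]
        sims = vecs @ target
        order = np.argsort(-sims)          # most similar first
        # Skip the query word itself, then keep the top k.
        return [vocab[i] for i in order if vocab[i] != word][:k]
    ```

    Running the same query against embeddings trained on different time slices (or regions) and diffing the neighbor lists is essentially the contrast the paper's visualization presents interactively.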

  4. The (lack of) effect of dynamic visual noise on the concreteness effect in short-term memory.

    PubMed

    Castellà, Judit; Campoy, Guillermo

    2018-05-17

    It has been suggested that the concreteness effect in short-term memory (STM) is a consequence of concrete words having more distinctive and richer semantic representations. The generation and storage of visual codes in STM could also play a crucial role in the effect, because concrete words are more imageable than abstract words. If this were the case, the introduction of a visual interference task would be expected to disrupt recall of concrete words. A Dynamic Visual Noise (DVN) display, which has been shown to eliminate the concreteness effect in long-term memory (LTM), was presented during encoding of concrete and abstract words in a STM serial recall task. Results showed a main effect of word type, with more item errors for abstract words, and a main effect of DVN, which impaired global performance due to more order errors, but no interaction, suggesting that DVN did not have any impact on the concreteness effect. These findings are discussed in terms of LTM participation through redintegration processes and in terms of the language-based models of verbal STM.

  5. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
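    The phi-square statistic mentioned here is, in its generic form, chi-square normalized by the number of observations. Applied to a 2 x R contingency table holding two stimuli's response counts, it yields a discriminability index: 0 when the two response distributions are identical (maximally confusable stimuli), larger when they diverge. The sketch below shows that generic computation; the exact formulation Strand and Sommers use to aggregate it into a lexical competition metric may differ in detail:

    ```python
    import numpy as np

    def phi_square(counts_a, counts_b):
        """Phi-square between two stimuli's response-count distributions.

        counts_a, counts_b: response counts over the same categories, e.g. how
        often each visual phoneme was reported when stimulus A (or B) was shown.
        Returns chi-square / N for the 2 x R contingency table.
        """
        table = np.array([counts_a, counts_b], dtype=float)
        n = table.sum()
        row = table.sum(axis=1, keepdims=True)
        col = table.sum(axis=0, keepdims=True)
        expected = row * col / n            # independence model
        mask = expected > 0                 # skip empty response categories
        chi2 = ((table - expected)[mask] ** 2 / expected[mask]).sum()
        return chi2 / n
    ```

    A word's lexical competition can then be estimated by summing a function of these pairwise confusabilities (weighted, e.g., by neighbor frequency) over perceptually similar words, which is the general shape of the metrics the abstract compares.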

  6. Perception of Words and Non-Words in the Upper and Lower Visual Fields

    ERIC Educational Resources Information Center

    Darker, Iain T.; Jordan, Timothy R.

    2004-01-01

    The findings of previous investigations into word perception in the upper and the lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye-movements to targets in the upper VF and the possibility that stimuli were not projected…

  7. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  8. The effect of compression and attention allocation on speech intelligibility. II

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2004-05-01

    Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.

  9. [Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].

    PubMed

    Jin, Yufei; Ma, Meng; Yang, Xin

    2016-04-01

    Medical image registration is very challenging due to the varied imaging modalities, image quality, wide inter-patient variability, and intra-patient variability as disease progresses, and it carries strict requirements for robustness. Inspired by semantic models, especially the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well on them. To benefit from the advantages of the related work, we proposed a novel visual word model named directional visual words, which performs better on medical images. We then applied this model to medical image registration. In our experiment, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately. Subsequently, we registered the corresponding images by the areas around these positions. The results of experiments performed on real cardiac images showed that our method can achieve high registration accuracy in specific areas.
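    The bag-of-visual-words machinery underlying this model can be sketched generically: local descriptors extracted from an image are quantized to their nearest codebook center (a k-means centroid) and pooled into a normalized histogram, which then serves as the image's signature for matching. The code below shows only this generic quantization step, not the directional descriptors that are the paper's contribution:

    ```python
    import numpy as np

    def bow_histogram(descriptors, codebook):
        """Quantize local descriptors to their nearest visual word and count.

        descriptors: array (D, dim) of local feature vectors from one image.
        codebook:    array (K, dim) of k-means centroids (the visual words).
        Returns a normalized histogram of visual-word occurrences, shape (K,).
        """
        # Pairwise squared distances between descriptors and centroids: (D, K).
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        words = d2.argmin(axis=1)                 # nearest visual word per descriptor
        hist = np.bincount(words, minlength=len(codebook)).astype(float)
        return hist / hist.sum()
    ```

    Two image regions can then be compared by a histogram distance; searching such comparisons coarse-to-fine over a spatial pyramid is the localization strategy the abstract describes.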

  10. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  11. When a hit sounds like a kiss: An electrophysiological exploration of semantic processing in visual narrative.

    PubMed

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-06-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoetic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. When a hit sounds like a kiss: an electrophysiological exploration of semantic processing in visual narrative

    PubMed Central

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-01-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. PMID:28242517

  13. Music reading expertise modulates hemispheric lateralization in English word processing but not in Chinese character processing.

    PubMed

    Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen

    2018-07-01

    Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.

  14. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  15. Effect of word familiarity on visually evoked magnetic fields.

    PubMed

    Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M

    2004-11-30

    This study investigated the effect of the word familiarity of visual stimuli on the word-recognizing function of the human brain. Word familiarity is an index of the relative ease of word perception, characterized by the speed and accuracy of word recognition. We examined the effect of word familiarity on visually evoked magnetic fields elicited during a word-naming task, using "Hiragana" (phonetic characters in Japanese orthography) strings as visual stimuli. The words were selected from a database of lexical properties of Japanese; the four-character "Hiragana" words were grouped and presented in four classes of familiarity. Three components were observed in the averaged root mean square (RMS) waveforms, at latencies of about 100 ms, 150 ms and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation with familiarity (F(3,36)=5.501, p=0.035), and the equivalent current dipoles (ECDs) of this component were located in the intraparietal sulcus (IPS). The increase of the 220 ms component with familiarity might reflect ideographic word recognition, in which a familiar word is retrieved "as a whole": the interaction among its characters, which strengthens with familiarity, may let the word function "as a large symbol" and enhance a "pop-out" that segments the word (as a figure) from the ground.

  16. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers.

    PubMed

    Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-08-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.

  17. Visual noise disrupts conceptual integration in reading.

    PubMed

    Gao, Xuefei; Stine-Morrow, Elizabeth A L; Noh, Soo Rim; Eskew, Rhea T

    2011-02-01

    The Effortfulness Hypothesis suggests that sensory impairment (either simulated or age-related) may decrease capacity for semantic integration in language comprehension. We directly tested this hypothesis by measuring resource allocation to different levels of processing during reading (i.e., word vs. semantic analysis). College students read three sets of passages word-by-word, one at each of three levels of dynamic visual noise. There was a reliable interaction between processing level and noise, such that visual noise increased resources allocated to word-level processing, at the cost of attention paid to semantic analysis. Recall of the most important ideas also decreased with increasing visual noise. Results suggest that sensory challenge can impair higher-level cognitive functions in learning from text, supporting the Effortfulness Hypothesis.

  18. What Can Graph Theory Tell Us About Word Learning and Lexical Retrieval?

    PubMed Central

    Vitevitch, Michael S.

    2008-01-01

    Purpose Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Method Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. Results The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes suggesting that similar processes might influence the development of the lexicon. Conclusions The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing. PMID:18367686
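
    The network construction used here — nodes are words, links connect phonological neighbors — is straightforward to sketch. Below is a toy version that uses one-edit letter strings as a stand-in for phonological neighbors and computes the per-node clustering coefficient on which the small-world analysis relies. Function names are ours; a real analysis would work from phonemic transcriptions and use a package such as Pajek or NetworkX.

```python
from itertools import combinations

def edit_distance_one(a, b):
    """True if a and b differ by one substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = sorted((a, b), key=len)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

def neighbor_graph(words):
    """Adjacency sets: an edge joins every pair of one-edit neighbors."""
    adj = {w: set() for w in words}
    for a, b in combinations(words, 2):
        if edit_distance_one(a, b):
            adj[a].add(b)
            adj[b].add(a)
    return adj

def clustering_coefficient(adj, w):
    """Fraction of w's neighbor pairs that are themselves connected."""
    nbrs = adj[w]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return 2 * links / (k * (k - 1))
```

    For example, in the tiny lexicon ["cat", "bat", "rat", "cab", "at"], "cat" has four neighbors, three pairs of which are themselves linked, giving it a clustering coefficient of 0.5.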

  19. Prediction During Natural Language Comprehension.

    PubMed

    Willems, Roel M; Frank, Stefan L; Nijhof, Annabel D; Hagoort, Peter; van den Bosch, Antal

    2016-06-01

    The notion of prediction is studied in cognitive neuroscience with increasing intensity. We investigated the neural basis of 2 distinct aspects of word prediction, derived from information theory, during story comprehension. We assessed the effect of entropy of next-word probability distributions as well as surprisal. A computational model determined entropy and surprisal for each word in 3 literary stories. Twenty-four healthy participants listened to the same 3 stories while their brain activation was measured using fMRI. Reversed speech fragments were presented as a control condition. Brain areas sensitive to entropy were left ventral premotor cortex, left middle frontal gyrus, right inferior frontal gyrus, left inferior parietal lobule, and left supplementary motor area. Areas sensitive to surprisal were left inferior temporal sulcus ("visual word form area"), bilateral superior temporal gyrus, right amygdala, bilateral anterior temporal poles, and right inferior frontal sulcus. We conclude that prediction during language comprehension can occur at several levels of processing, including at the level of word form. Our study exemplifies the power of combining computational linguistics with cognitive neuroscience, and additionally underlines the feasibility of studying continuous spoken language materials with fMRI. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
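
    The two information-theoretic quantities in this record are simple functions of a next-word probability distribution: entropy measures uncertainty about what comes next, while surprisal measures how unexpected the word that actually occurred was. A minimal sketch (the example distribution is invented for illustration):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a next-word probability distribution."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def surprisal(dist, word):
    """Surprisal (bits) of the word that actually occurs: -log2 P(word)."""
    return -math.log2(dist[word])

# Hypothetical model predictions for the word after "London Bridge is ..."
next_word = {"falling": 0.5, "broken": 0.25, "standing": 0.25}
```

    Here the entropy is 1.5 bits before the word arrives, and if "broken" occurs its surprisal (2 bits) is higher than that of the more expected "falling" (1 bit).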

  20. Spatial layout of letters in nonwords affects visual short-term memory load: evidence from human electrophysiology.

    PubMed

    Prime, David; Dell'acqua, Roberto; Arguin, Martin; Gosselin, Frédéric; Jolicœur, Pierre

    2011-03-01

    The sustained posterior contralateral negativity (SPCN) was used to investigate the effect of spatial layout on the maintenance of letters in VSTM. SPCN amplitude was measured for words, nonwords, and scrambled nonwords. We reexamined the effects of spatial layout of letters on SPCN amplitude in a design that equated the mean frequency of use of each position. Scrambled letters that did not form words elicited a larger SPCN than either words or nonwords, indicating lower VSTM load for nonwords presented in a typical horizontal array than the load observed for the same letters presented in spatially scrambled locations. In contrast, prior research has shown that the spatial extent of arrays of simple stimuli did not influence the amplitude of the SPCN. Thus, the present results indicate the existence of encoding and VSTM maintenance mechanisms specific to letter and word processing. Copyright © 2010 Society for Psychophysiological Research.

  1. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  2. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  3. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  4. Is There a "Fete" in "Fetish"? Effects of Orthographic Opacity on Morpho-Orthographic Segmentation in Visual Word Recognition

    ERIC Educational Resources Information Center

    McCormick, Samantha F.; Rastle, Kathleen; Davis, Matthew H.

    2008-01-01

    Recent research using masked priming has suggested that there is a form of morphological decomposition that is based solely on the appearance of morphological complexity and that operates independently of semantic information [Longtin, C.M., Segui, J., & Halle, P. A. (2003). Morphological priming without morphological relationship. "Language and…

  5. The Development of Long-Term Lexical Representations through Hebb Repetition Learning

    ERIC Educational Resources Information Center

    Szmalec, Arnaud; Page, Mike P. A.; Duyck, Wouter

    2012-01-01

    This study clarifies the involvement of short- and long-term memory in novel word-form learning, using the Hebb repetition paradigm. In Experiment 1, participants recalled sequences of visually presented syllables (e.g., "la"-"va"-"bu"-"sa"-"fa"-"ra"-"re"-"si"-"di"), with one particular (Hebb) sequence repeated on every third trial. Crucially,…

  6. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words from the phrase longer than two characters constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we found no reliable effects of word order. Thus, in "London Bridge is falling down," "London" and "down" were found no faster than "falling."
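
    The RT pattern this record reports — linear in visual set size V, logarithmic in memory set size M — amounts to an additive model of the form RT = a + b·V + c·log2(M). The sketch below uses made-up coefficients purely to show the shape of the prediction, not values fitted to the study's data.

```python
import math

def predicted_rt(visual_set_size, memory_set_size,
                 base=500.0, per_item=40.0, per_log_memory=80.0):
    """Illustrative hybrid-search response time (ms): cost grows linearly
    with items on screen but only logarithmically with items in memory.
    All coefficients here are hypothetical."""
    return (base
            + per_item * visual_set_size
            + per_log_memory * math.log2(memory_set_size))
```

    The signature property is that doubling the memory set adds a constant increment (here 80 ms) regardless of whether it grows from 2 to 4 or from 8 to 16 items, whereas each extra on-screen word adds a fixed per-item cost.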

  7. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  8. The neural mechanisms of word order processing revisited: electrophysiological evidence from Japanese.

    PubMed

    Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina

    2008-11-01

    We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing costs in comparison to initial subjects (in the form of a transient negativity) only when followed by a prosodic boundary. A similar effect was observed using visual presentation in Experiment 2, however only for accusative but not for dative objects. These results support a relational account of word order processing, in which the costs of comprehending an object-initial word order are determined by the linearization properties of the initial object in relation to the linearization properties of possible upcoming arguments. In the absence of a prosodic boundary, the possibility for subject omission in Japanese renders it likely that the initial accusative is the only argument in the clause. Hence, no upcoming arguments are expected and no linearization problem can arise. A prosodic boundary or visual segmentation, by contrast, indicate an object-before-subject word order, thereby leading to a mismatch between argument "prominence" (e.g. in terms of thematic roles) and linear order. This mismatch is alleviated when the initial object is highly prominent itself (e.g. in the case of a dative, which can bear the higher-ranking thematic role in a two argument relation). We argue that the processing mechanism at work here can be distinguished from more general aspects of "dependency processing" in object-initial sentences.

  9. The neural circuits recruited for the production of signs and fingerspelled words

    PubMed Central

    Emmorey, Karen; Mehta, Sonya; McCullough, Stephen; Grabowski, Thomas J.

    2016-01-01

    Signing differs from typical non-linguistic hand actions because movements are not visually guided, finger movements are complex (particularly for fingerspelling), and signs are not produced as holistic gestures. We used positron emission tomography to investigate the neural circuits involved in the production of American Sign Language (ASL). Different types of signs (one-handed (articulated in neutral space), two-handed (neutral space), and one-handed body-anchored signs) were elicited by asking deaf native signers to produce sign translations of English words. Participants also fingerspelled (one-handed) printed English words. For the baseline task, participants indicated whether a word contained a descending letter. Fingerspelling engaged ipsilateral motor cortex and cerebellar cortex in contrast to both one-handed signs and the descender baseline task, which may reflect greater timing demands and complexity of handshape sequences required for fingerspelling. Greater activation in the visual word form area was also observed for fingerspelled words compared to one-handed signs. Body-anchored signs engaged bilateral superior parietal cortex to a greater extent than the descender baseline task and neutral space signs, reflecting the motor control and proprioceptive monitoring required to direct the hand toward a specific location on the body. Less activation in parts of the motor circuit was observed for two-handed signs compared to one-handed signs, possibly because, for half of the signs, handshape and movement goals were spread across the two limbs. Finally, the conjunction analysis comparing each sign type with the descender baseline task revealed common activation in the supramarginal gyrus bilaterally, which we interpret as reflecting phonological retrieval and encoding processes. PMID:27459390

  10. Music and words in the visual cortex: The impact of musical expertise.

    PubMed

    Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent

    2017-01-01

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Recognition and reading aloud of kana and kanji word: an fMRI study.

    PubMed

    Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao

    2009-03-16

    It has been proposed that different brain regions are recruited for processing the two Japanese writing systems, namely, kanji (morphograms) and kana (syllabograms). However, this difference may depend on the type of word used and on the type of task performed. Using fMRI, we investigated brain activation during the processing of kanji and kana words of similarly high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side; the subjects were required to press a button corresponding to the real word in the word recognition task and to read aloud the real word in the reading aloud task. Brain activations were similar between kanji and kana during the reading aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which are related mainly to visual word-form analysis and visuospatial attention. Concerning the difference in brain activity between the two tasks, differential activation was found only in the regions associated with task-specific sensorimotor processing for kana, whereas the visuospatial attention network also showed greater activation during the word recognition task than during the reading aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between script characteristics and task demands.

  12. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    PubMed

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  13. Developmental changes in the inferior frontal cortex for selecting semantic representations

    PubMed Central

    Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li

    2012-01-01

    Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10–15 year old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Different from English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757

  14. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  15. Large-scale functional networks connect differently for processing words and symbol strings.

    PubMed

    Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta

    2018-01-01

    Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.

  16. Early development of letter specialization in left fusiform is associated with better word reading and smaller fusiform face area.

    PubMed

Centanni, Tracy M; Norton, Elizabeth S; Park, Anne; Beach, Sara D; Halverson, Kelly; Ozernov-Palchik, Ola; Gaab, Nadine; Gabrieli, John D. E.

    2018-03-05

    A functional region of left fusiform gyrus termed "the visual word form area" (VWFA) develops during reading acquisition to respond more strongly to printed words than to other visual stimuli. Here, we examined responses to letters among 5- and 6-year-old early kindergarten children (N = 48) with little or no school-based reading instruction who varied in their reading ability. We used functional magnetic resonance imaging (fMRI) to measure responses to individual letters, false fonts, and faces in left and right fusiform gyri. We then evaluated whether signal change and size (spatial extent) of letter-sensitive cortex (greater activation for letters versus faces) and letter-specific cortex (greater activation for letters versus false fonts) in these regions related to (a) standardized measures of word-reading ability and (b) signal change and size of face-sensitive cortex (fusiform face area or FFA; greater activation for faces versus letters). Greater letter specificity, but not letter sensitivity, in left fusiform gyrus correlated positively with word reading scores. Across children, in the left fusiform gyrus, greater size of letter-sensitive cortex correlated with lesser size of FFA. These findings are the first to suggest that in beginning readers, development of letter responsivity in left fusiform cortex is associated with both better reading ability and also a reduction of the size of left FFA that may result in right-hemisphere dominance for face perception. © 2018 John Wiley & Sons Ltd.

  17. Using Wordle as a Supplementary Research Tool

    ERIC Educational Resources Information Center

    McNaught, Carmel; Lam, Paul

    2010-01-01

A word cloud is a visualization of text in which more frequently used words are highlighted by being given greater prominence in the representation. We have used Wordle to produce word-cloud analyses of the spoken and written responses of informants in two research projects. The product demonstrates a fast and visually rich way…
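
    Since a word cloud is driven purely by token counts (font size is scaled to frequency), the underlying computation is a simple frequency table. A minimal sketch, using invented response text rather than the study's data:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count word tokens; a word-cloud renderer scales each word's size by its count."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens).most_common(top_n)

# hypothetical informant responses, for illustration only
responses = "the tutorials were useful and the examples were very useful"
print(word_frequencies(responses))
```

    Tools such as Wordle add layout and styling on top, but the analytic content is exactly this count table.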

  18. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  19. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  20. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  1. MEGALEX: A megastudy of visual and auditory word recognition.

    PubMed

    Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan

    2018-06-01

Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website (https://sedufau.shinyapps.io/megalex/) and are searchable at www.lexique.org, inside the Open Lexique search engine.

  2. Syllables and bigrams: orthographic redundancy and syllabic units affect visual word recognition at different processing levels.

    PubMed

    Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M

    2009-04-01

    Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.

  3. Mechanisms of attention in reading parafoveal words: a cross-linguistic study in children.

    PubMed

    Siéroff, Eric; Dahmen, Riadh; Fagard, Jacqueline

    2012-05-01

The right visual field superiority (RVFS) for words may be explained by the cerebral lateralization for language, scanning habits in relation to script direction, and spatial attention. The present study explored the influence of spatial attention on the RVFS in relation to scanning habits in school-age children. French second- and fourth-graders identified briefly presented French parafoveal words. Tunisian second- and fourth-graders identified Arabic words, and Tunisian fourth-graders identified French words. The distribution of spatial attention was evaluated using a distractor in the visual field opposite the word. Correct-identification scores showed that reading direction had only a partial effect on the identification of parafoveal words and the distribution of attention, with a clear RVFS and a larger effect of the distractor in the left visual field in French children reading French words, and an absence of asymmetry when Tunisian children read Arabic words. Fourth-grade Tunisian children also showed an RVFS when reading French words without an asymmetric distribution of attention, suggesting that their native language may have partially influenced reading strategies in the newly learned language. However, the mode of letter processing, evaluated by a qualitative error score, was only influenced by reading direction, with more sequential processing in the visual field where reading "begins." The distribution of attention when reading parafoveal words is better explained by the interaction between left hemisphere activation and strategies related to reading direction. We discuss these results in light of an attentional theory that dissociates selection and preparation.

  4. Looking and touching: What extant approaches reveal about the structure of early word knowledge

    PubMed Central

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2014-01-01

    The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711

  5. Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King

    2017-11-01

Current literature suggests that right hemisphere lesions produce predominantly spatial-related dyslexic errors in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He made profound spatial-related errors during English word reading, for both real words and non-words. During Chinese word reading, he made significantly fewer errors than in English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographic word representations, in both ideographic and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. Individual Differences in Reported Visual Imagery and Memory Performance.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.; Demers, Elizabeth G.

    1979-01-01

    High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)

  7. Dual Coding in Children.

    ERIC Educational Resources Information Center

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  8. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  9. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  10. Deployment of spatial attention to words in central and peripheral vision.

    PubMed

    Ducrot, Stéphanie; Grainger, Jonathan

    2007-05-01

    Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.

  11. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed that the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and both are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as for lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  12. Encourage Students to Read through the Use of Data Visualization

    ERIC Educational Resources Information Center

    Bandeen, Heather M.; Sawin, Jason E.

    2012-01-01

    Instructors are always looking for new ways to engage students in reading assignments. The authors present a few techniques that rely on a web-based data visualization tool called Wordle (wordle.net). Wordle creates word frequency representations called word clouds. The larger a word appears within a cloud, the more frequently it occurs within a…

  13. Vitality Forms Expressed by Others Modulate Our Own Motor Response: A Kinematic Study

    PubMed Central

    Di Cesare, Giuseppe; De Stefani, Elisa; Gentilucci, Maurizio; De Marco, Doriana

    2017-01-01

During social interaction, actions and words may be expressed in different ways, for example, gently or rudely. A handshake can be gentle or vigorous and, similarly, tone of voice can be pleasant or rude. These aspects of social communication have been named vitality forms by Daniel Stern. Vitality forms represent how an action is performed and characterize all human interactions. In spite of their importance in social life, to date it is not clear whether the vitality forms expressed by the agent can influence the execution of a subsequent action performed by the receiver. To shed light on this matter, in the present study we carried out a kinematic study to assess whether and how the visual and auditory properties of vitality forms expressed by others influenced the motor responses of participants. In particular, participants were presented with video-clips showing a male and a female actor performing a "giving request" (give me) or a "taking request" (take it) in visual, auditory, and mixed (visual and auditory) modalities. Most importantly, requests were expressed with rude or gentle vitality forms. After the actor's request, participants performed a subsequent action. Results showed that the vitality forms expressed by the actors influenced the kinematic parameters of the participants' actions regardless of the modality by which they were conveyed. PMID:29204114

  14. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  15. On compensatory strategies and computational models: the case of pure alexia.

    PubMed

    Shallice, Tim

    2014-01-01

The article is concerned with inferences from the behaviour of neurological patients to models of normal function. It takes the letter-by-letter reading strategy common in pure alexic patients as an example of the methodological problems that compensatory strategies create for such inferences. Evidence is discussed concerning three ways the letter-by-letter reading process might operate: "reversed spelling"; the use of the phonological input buffer as a temporary holding store during word building; and the use of serial input to the visual word-form system entirely within the visual-orthographic domain, as in the model of Plaut [1999. A connectionist approach to word reading and acquired dyslexia: Extension to sequential processing. Cognitive Science, 23, 543-568]. The compensatory strategy used by at least one pure alexic patient does not fit with the third of these possibilities. On the more general question, it is argued that even if compensatory strategies are being used, the behaviour of neurological patients can be useful for the development and assessment of first-generation information-processing models of normal function, but it is not likely to be useful for the development and assessment of second-generation computational models.

  16. A hierarchical word-merging algorithm with class separability measure.

    PubMed

    Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan

    2014-03-01

    In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
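
    The merging step itself can be sketched in a few lines. The toy version below uses mutual information between words and classes as the separability measure and a brute-force search over pairs; it illustrates the greedy hierarchical idea only, not the paper's optimized indexing structure, and the toy counts are invented.

```python
import numpy as np
from itertools import combinations

def mutual_information(counts):
    """I(word; class) in nats for a words-by-classes count matrix."""
    p = counts / counts.sum()
    pw = p.sum(axis=1, keepdims=True)   # word marginals
    pc = p.sum(axis=0, keepdims=True)   # class marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log(p / (pw * pc))
    return float(np.nansum(terms))       # nansum treats 0*log(0) terms as 0

def merge_codebook(counts, target_size):
    """Greedily merge pairs of visual words, keeping at each level the
    merge that best preserves word/class mutual information."""
    words = [frozenset([i]) for i in range(counts.shape[0])]
    counts = counts.astype(float).copy()
    while len(words) > target_size:
        best = None
        for i, j in combinations(range(len(words)), 2):
            merged = np.delete(counts, j, axis=0)
            merged[i] += counts[j]       # pooled counts of the candidate merge
            mi = mutual_information(merged)
            if best is None or mi > best[0]:
                best = (mi, i, j, merged)
        _, i, j, counts = best
        words[i] |= words[j]
        del words[j]
    return words, counts

# toy codebook: 4 visual words (rows) counted over 2 classes (columns)
counts = np.array([[10, 0], [9, 1], [0, 10], [1, 9]])
words, merged = merge_codebook(counts, target_size=2)
print(words)  # words with similar class profiles end up merged together
```

    This naive search is quadratic in codebook size at every level; the indexing structure described in the abstract is what makes merging 10,000 words tractable.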

  17. Spoken words can make the invisible visible-Testing the involvement of low-level visual representations in spoken word processing.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-03-01

    The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Phonological and orthographic influences in the bouba-kiki effect.

    PubMed

    Cuskley, Christine; Simner, Julia; Kirby, Simon

    2017-01-01

    We examine a high-profile phenomenon known as the bouba-kiki effect, in which non-word names are assigned to abstract shapes in systematic ways (e.g. rounded shapes are preferentially labelled bouba over kiki). In a detailed evaluation of the literature, we show that most accounts of the effect point to predominantly or entirely iconic cross-sensory mappings between acoustic or articulatory properties of sound and shape as the mechanism underlying the effect. However, these accounts have tended to confound the acoustic or articulatory properties of non-words with another fundamental property: their written form. We compare traditional accounts of direct audio or articulatory-visual mapping with an account in which the effect is heavily influenced by matching between the shapes of graphemes and the abstract shape targets. The results of our two studies suggest that the dominant mechanism underlying the effect for literate subjects is matching based on aligning letter curvature and shape roundedness (i.e. non-words with curved letters are matched to round shapes). We show that letter curvature is strong enough to significantly influence word-shape associations even in auditory tasks, where written word forms are never presented to participants. However, we also find an additional phonological influence in that voiced sounds are preferentially linked with rounded shapes, although this arises only in a purely auditory word-shape association task. We conclude that many previous investigations of the bouba-kiki effect may not have given appropriate consideration or weight to the influence of orthography among literate subjects.

  19. Topic Transition in Educational Videos Using Visually Salient Words

    ERIC Educational Resources Information Center

    Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om

    2015-01-01

    In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…

  20. Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity

    ERIC Educational Resources Information Center

    Chen, Yi-Chuan; Spence, Charles

    2011-01-01

    We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…

  1. Visual Imagery for Letters and Words. Final Report.

    ERIC Educational Resources Information Center

    Weber, Robert J.

In a series of six experiments, undergraduate college students visually imagined letters or words and then classified the imagined letters as rapidly as possible for some physical property, such as vertical height. This procedure allowed for a preliminary assessment of the temporal parameters of visual imagination. The results delineate a number of…

  2. An ERP investigation of visual word recognition in syllabary scripts.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2013-06-01

The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.

  4. Abstract conceptual feature ratings predict gaze within written word arrays: evidence from a Visual Wor(l)d paradigm

    PubMed Central

    Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J

    2016-01-01

    The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical ‘visual world paradigm’. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were nonetheless significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings, which are based on individual words, predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings, which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. PMID:26901571
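    The nearest-neighbour logic behind these Euclidean distance metrics can be sketched concretely. In the Python snippet below (an illustrative sketch only: the words, feature dimensions, and rating values are hypothetical, not the actual ACF norms), each word is a vector of feature ratings, and the array item closest to the probe in rating space is taken as the predicted target:

```python
import math

# Hypothetical ACF-style ratings: each word is a vector of feature scores.
# The four dimensions here are placeholders, not the real ACF dimensions.
ratings = {
    "justice": [2.1, 5.8, 6.3, 4.0],
    "law":     [2.4, 5.1, 6.0, 4.2],
    "flavor":  [6.5, 4.9, 2.2, 3.1],
    "rhythm":  [5.9, 5.2, 3.0, 3.8],
    "doubt":   [1.8, 6.2, 4.1, 5.5],
}

def euclidean(a, b):
    """Euclidean distance between two feature-rating vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def most_related(probe, candidates):
    """Return the candidate word with the smallest distance to the probe."""
    return min(candidates, key=lambda w: euclidean(ratings[probe], ratings[w]))

best = most_related("justice", ["law", "flavor", "rhythm", "doubt"])  # → "law"
```

    Under this scheme the predicted target is simply the probe's nearest neighbour in rating space; the study then related such probe-target distances to fixation measures.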

  5. Too little, too late: reduced visual span and speed characterize pure alexia.

    PubMed

    Starrfelt, Randi; Habekost, Thomas; Leff, Alexander P

    2009-12-01

    Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that 1) pure alexia is caused by damage to a reading-specific brain region in the left fusiform gyrus, or 2) pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise due to a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected.

  6. Too Little, Too Late: Reduced Visual Span and Speed Characterize Pure Alexia

    PubMed Central

    Habekost, Thomas; Leff, Alexander P.

    2009-01-01

    Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that 1) pure alexia is caused by damage to a reading-specific brain region in the left fusiform gyrus, or 2) pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise due to a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected. PMID:19366870

  7. What You See Isn’t Always What You Get: Auditory Word Signals Trump Consciously Perceived Words in Lexical Access

    PubMed Central

    Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.

    2016-01-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  8. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed Central

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could also be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by writing them down by hand. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, showing a massive encoding-retrieval match effect in the two experiments. However, a mixed-model analysis of the pseudo-word production results revealed a significant learning-condition effect that remained after controlling for the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, regardless of the post-test production task. PMID:24575058

  9. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could also be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by writing them down by hand. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, showing a massive encoding-retrieval match effect in the two experiments. However, a mixed-model analysis of the pseudo-word production results revealed a significant learning-condition effect that remained after controlling for the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, regardless of the post-test production task.

  10. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine, and later Benson and Geschwind, postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. More recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading, although the exact location and function of these areas remain a matter of debate. We sought to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words, with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA), where stimulation resulted in global language dysfunction in both the visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with the basal temporal language area. A portion of the visual language area was exclusively involved in lexical processing, while the other part of this region processed both lexical and nonlexical symbols.

  11. Shared vs. specific brain activation changes in dyslexia after training of phonology, attention, or reading.

    PubMed

    Heim, Stefan; Pape-Neumann, Julia; van Ermingen-Marbach, Muna; Brinkhaus, Moti; Grande, Marion

    2015-07-01

    Whereas the neurobiological basis of developmental dyslexia has received substantial attention, little is known about the processes in the brain during remediation. This holds particularly in light of recent findings on cognitive subtypes of dyslexia, which suggest interactions between individual profiles, training methods, and the task performed in the scanner. We therefore trained three groups of German dyslexic primary school children in the domains of phonology, attention, or visual word recognition. We compared neurofunctional changes after 4 weeks of training in these groups to those in untrained normal readers in a reading task and in a task of visual attention. The overall reading improvement in the dyslexic children was comparable across groups. It was accompanied by a substantial increase in activation in the visual word form area (VWFA) during a reading task inside the scanner. Moreover, there were activation increases unique to each training group in the reading task. In contrast, when children performed the visual attention task, shared training effects were found in the left inferior frontal sulcus and gyrus, which varied in amplitude between the groups. Overall, the data reveal that different remediation programmes matched to individual profiles of dyslexia may improve reading ability and commonly affect the VWFA in dyslexia as a shared part of otherwise distinct networks.

  12. Evaluating a Split Processing Model of Visual Word Recognition: Effects of Orthographic Neighborhood Size

    ERIC Educational Resources Information Center

    Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.

    2004-01-01

    The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…

  13. Evaluating a Bilingual Text-Mining System with a Taxonomy of Key Words and Hierarchical Visualization for Understanding Learner-Generated Text

    ERIC Educational Resources Information Center

    Kong, Siu Cheung; Li, Ping; Song, Yanjie

    2018-01-01

    This study evaluated a bilingual text-mining system, which incorporated a bilingual taxonomy of key words and provided hierarchical visualization, for understanding learner-generated text in the learning management systems through automatic identification and counting of matching key words. A class of 27 in-service teachers studied a course…

  14. The Processing of Consonants and Vowels during Letter Identity and Letter Position Assignment in Visual-Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel

    2011-01-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…

  15. Syllabic Parsing in Children: A Developmental Study Using Visual Word-Spotting in Spanish

    ERIC Educational Resources Information Center

    Álvarez, Carlos J.; Garcia-Saavedra, Guacimara; Luque, Juan L.; Taft, Marcus

    2017-01-01

    Some inconsistency is observed in the results from studies of reading development regarding the role of the syllable in visual word recognition, perhaps due to a disparity between the tasks used. We adopted a word-spotting paradigm, with Spanish children of second grade (mean age: 7 years) and sixth grade (mean age: 11 years). The children were…

  16. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    ERIC Educational Resources Information Center

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  17. Charting the Functional Relevance of Broca's Area for Visual Word Recognition and Picture Naming in Dutch Using fMRI-Guided TMS

    ERIC Educational Resources Information Center

    Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo

    2013-01-01

    Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…

  18. Reading Habits, Perceptual Learning, and Recognition of Printed Words

    ERIC Educational Resources Information Center

    Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram

    2004-01-01

    The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…

  19. Near-instant automatic access to visually presented words in the human neocortex: neuromagnetic evidence.

    PubMed

    Shtyrov, Yury; MacGregor, Lucy J

    2016-05-24

    Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.

  20. The time-course of the cross-modal semantic modulation of visual picture processing by naturalistic sounds and spoken words.

    PubMed

    Chen, Yi-Chuan; Spence, Charles

    2013-01-01

    The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.

  1. Jasper Johns' Painted Words.

    ERIC Educational Resources Information Center

    Levinger, Esther

    1989-01-01

    States that the painted words in Jasper Johns' art act in two different capacities: concealed words partake in the artist's interrogation of visual perception; and visible painted words question classical representation. Argues that words are Johns' means of critiquing modernism. (RS)

  2. Implicit integration in a case of integrative visual agnosia.

    PubMed

    Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo

    2007-05-15

    We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.

  3. Looking and touching: what extant approaches reveal about the structure of early word knowledge.

    PubMed

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2015-09-01

    The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. © 2014 The Authors. Developmental Science published by John Wiley & Sons Ltd.

  4. A New Perspective on Visual Word Processing Efficiency

    PubMed Central

    Houpt, Joseph W.; Townsend, James T.; Donkin, Christopher

    2013-01-01

    As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well-established accuracy advantage for perceiving letters in a word or in a pseudoword over letters alone or in random sequences, a comparable effect using response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well-established, response-time-based measure of efficiency, to provide evidence of word processing as a particularly efficient perceptual process, complementing the results from the accuracy domain. PMID:24334151
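    The capacity coefficient referred to here has a standard form in the systems factorial technology literature: C(t) = H_word(t) / Σᵢ H_letterᵢ(t), where H(t) = -log S(t) is the integrated hazard of the response-time distribution S(t) = P(RT > t). A minimal Python sketch follows (illustrative only; the study's actual estimators and statistical tests are more elaborate than this empirical-survivor-function version):

```python
import math

def integrated_hazard(rts, t):
    """Estimate H(t) = -log S(t) from a sample of response times,
    using the empirical survivor function S(t) = P(RT > t)."""
    s = sum(1 for rt in rts if rt > t) / len(rts)
    return -math.log(s) if s > 0 else float("inf")

def capacity_coefficient(word_rts, letter_rts_per_position, t):
    """C(t) = H_word(t) / sum_i H_letter_i(t).
    C(t) > 1 suggests super-capacity (word context helps);
    C(t) = 1 unlimited capacity; C(t) < 1 limited capacity."""
    h_word = integrated_hazard(word_rts, t)
    h_letters = sum(integrated_hazard(rts, t)
                    for rts in letter_rts_per_position)
    return h_word / h_letters
```

    With faster responses to whole words than predicted from the single-letter baselines, C(t) exceeds 1, which is the response-time signature of an efficient (super-capacity) word-context advantage.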

  5. The strengths and weaknesses in verbal short-term memory and visual working memory in children with hearing impairment and additional language learning difficulties.

    PubMed

    Willis, Suzi; Goldbart, Juliet; Stansfield, Jois

    2014-07-01

    The aim was to compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language learning difficulties with normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as their primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment did not display generalized processing difficulties and indeed demonstrated strengths in visual working memory. Poor ability to recall words, in combination with difficulties in early word learning, may identify children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. Such early identification has the potential to allow for target-specific intervention that may remediate their difficulties.

  6. Semantically induced distortions of visual awareness in a patient with Balint's syndrome.

    PubMed

    Soto, David; Humphreys, Glyn W

    2009-02-01

    We present data indicating that visual awareness of a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink) detection responses in a patient with simultanagnosia due to bilateral parietal lesions. We found that colour detection was influenced by the congruency between the meaning of the word and the relevant ink colour, with impaired performance when the word and the colour mismatched (on incongruent trials). This result held even when remote associations between meaning and colour were used (i.e., the word "PEA" influenced detection of the ink colour red). The results are consistent with a late locus of conscious visual experience that is derived at post-semantic levels. The implications for understanding the role of parietal cortex in object binding and visual awareness are discussed.

  7. The Impact of Visual-Spatial Attention on Reading and Spelling in Chinese Children

    ERIC Educational Resources Information Center

    Liu, Duo; Chen, Xi; Wang, Ying

    2016-01-01

    The present study investigated the associations of visual-spatial attention with word reading fluency and spelling in 92 third grade Hong Kong Chinese children. Word reading fluency was measured with a timed reading task whereas spelling was measured with a dictation task. Results showed that visual-spatial attention was a unique predictor of…

  8. A Task-Dependent Causal Role for Low-Level Visual Processes in Spoken Word Comprehension

    ERIC Educational Resources Information Center

    Ostarek, Markus; Huettig, Falk

    2017-01-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual…

  9. Qualitative Differences in the Representation of Abstract versus Concrete Words: Evidence from the Visual-World Paradigm

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel

    2009-01-01

    In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…

  10. Selective visual attention to emotional words: Early parallel frontal and visual activations followed by interactive effects in visual cortex.

    PubMed

    Schindler, Sebastian; Kissler, Johanna

    2016-10-01

    Human brains spontaneously differentiate between various emotional and neutral stimuli, including written words whose emotional quality is symbolic. In the electroencephalogram (EEG), emotional-neutral processing differences are typically reflected in the early posterior negativity (EPN, 200-300 ms) and the late positive potential (LPP, 400-700 ms). These components are also enlarged by task-driven visual attention, supporting the assumption that emotional content naturally drives attention. Still, the spatio-temporal dynamics of interactions between emotional stimulus content and task-driven attention remain to be specified. Here, we examine this issue in visual word processing. Participants attended to negative, neutral, or positive nouns while high-density EEG was recorded. Emotional content and top-down attention both amplified the EPN component in parallel. On the LPP, by contrast, emotion and attention interacted: Explicit attention to emotional words led to a substantially larger amplitude increase than did explicit attention to neutral words. Source analysis revealed early parallel effects of emotion and attention in bilateral visual cortex and a later interaction of both in right visual cortex. Distinct effects of attention were found in inferior, middle and superior frontal, paracentral, and parietal areas, as well as in the anterior cingulate cortex (ACC). Results specify separate and shared mechanisms of emotion and attention at distinct processing stages. Hum Brain Mapp 37:3575-3587, 2016. © 2016 Wiley Periodicals, Inc.

  11. The time course of morphological processing during spoken word recognition in Chinese.

    PubMed

    Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan

    2017-12-01

    We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the spoken word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early stage, before access to the representation of the whole word in Chinese.

  12. The utility of modeling word identification from visual input within models of eye movements in reading

    PubMed Central

    Bicknell, Klinton; Levy, Roger

    2012-01-01

    Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362

  13. Massive cortical reorganization in sighted Braille readers.

    PubMed

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-03-15

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As its result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA) that was modulated by their Braille reading speed and strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.

  14. The impact of task demand on visual word recognition.

    PubMed

    Yang, J; Zevin, J

    2014-07-11

    The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.

  15. Modulation of human extrastriate visual processing by selective attention to colours and words.

    PubMed

    Nobre, A C; Allison, T; McCarthy, G

    1998-07-01

    The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.

  16. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
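The model's core mechanism, word recognition as probabilistic inference over points in a feature space with cues fused by precision weighting, can be illustrated with a toy Monte Carlo sketch. Everything below is an illustrative assumption (Gaussian word prototypes, isotropic cue noise, and the specific `n_words` and sigma values), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def recognition_accuracy(dim, sigma_a, sigma_v=None, n_words=50, n_trials=2000):
    """Monte Carlo accuracy of MAP word identification in a Gaussian feature space.

    Words are random prototypes in `dim` dimensions; the listener receives a noisy
    auditory cue (std sigma_a) and, optionally, a noisy visual cue (std sigma_v).
    Cues are fused by precision weighting, the Bayes-optimal rule for Gaussians.
    """
    words = rng.standard_normal((n_words, dim))
    correct = 0
    for _ in range(n_trials):
        true = rng.integers(n_words)
        x_a = words[true] + sigma_a * rng.standard_normal(dim)
        if sigma_v is None:
            estimate = x_a
        else:
            x_v = words[true] + sigma_v * rng.standard_normal(dim)
            w_a = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
            estimate = w_a * x_a + (1 - w_a) * x_v
        # MAP decision: nearest prototype to the (fused) estimate
        guess = np.argmin(((words - estimate) ** 2).sum(axis=1))
        correct += guess == true
    return correct / n_trials

# Audio-visual gain over audio-only at low, medium, and high auditory noise,
# in a low- and a high-dimensional feature space.
for dim in (2, 40):
    gains = [recognition_accuracy(dim, s, sigma_v=2.0) - recognition_accuracy(dim, s)
             for s in (0.5, 2.0, 8.0)]
    print(dim, [round(g, 2) for g in gains])
```

Comparing where the gain peaks across the two dimensionalities is the qualitative contrast the abstract describes (inverse effectiveness in low dimensions versus a mid-noise maximum in high dimensions).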

  17. Does visual letter similarity modulate masked form priming in young readers of Arabic?

    PubMed

    Perea, Manuel; Abu Mallouh, Reem; Mohammed, Ahmed; Khalifa, Batoul; Carreiras, Manuel

    2018-05-01

    We carried out a masked priming lexical decision experiment to study whether visual letter similarity plays a role during the initial phases of word processing in young readers of Arabic (fifth graders). Arabic is ideally suited to test these effects because most Arabic letters share their basic shape with at least one other letter and differ only in the number/position of diacritical points (e.g., ض - ص ;ظ - ط ;غ - ع ;ث - ت - ن ب ;ذ - د ;خ - ح - ج ;ق - ف ;ش - س ;ز - ر). We created two one-letter-different priming conditions for each target word, in which a letter from the consonantal root was substituted by another letter that did or did not keep the same shape (e.g., خدمة - حدمة vs. خدمة - فدمة). Another goal of the current experiment was to test the presence of masked orthographic priming effects, which are thought to be unreliable in Semitic languages. To that end, we included an unrelated priming condition. We found a sizable masked orthographic priming effect relative to the unrelated condition regardless of visual letter similarity, thereby revealing that young readers are able to quickly process the diacritical points of Arabic letters. Furthermore, the presence of masked orthographic priming effects in Arabic suggests that the word identification stream in Indo-European and Semitic languages is more similar than previously thought. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Processing visual words with numbers: electrophysiological evidence for semantic activation.

    PubMed

    Lien, Mei-Ching; Allen, Philip; Martin, Nicole

    2014-08-01

    Perea, Duñabeitia, and Carreiras (Journal of Experimental Psychology: Human Perception and Performance 34:237-241, 2008) found that LEET stimuli, formed by a mixture of digits and letters (e.g., T4BL3 instead of TABLE), produced priming effects similar to those for regular words. This finding led them to conclude that LEET stimuli automatically activate lexical information. In the present study, we examined whether semantic activation occurs for LEET stimuli by using an electrophysiological measure called the N400 effect. The N400 effect reflects detection of a mismatch between a word and the current semantic context. This N400 effect could occur only if the LEET stimulus had been identified and processed semantically. Participants determined whether a stimulus (word or LEET) was related to a given category (e.g., APPLE or 4PPL3 belongs to the category "fruit," but TABLE or T4BL3 does not). We found that LEET stimuli produced an N400 effect similar in magnitude to that for regular uppercase words, suggesting that LEET stimuli can access meaning in a manner similar to words presented in consistent uppercase letters.

  19. Representational neglect for words as revealed by bisection tasks.

    PubMed

    Arduino, Lisa S; Marinelli, Chiara Valeria; Pasotti, Fabrizio; Ferrè, Elisa Raffaella; Bottini, Gabriella

    2012-03-01

    In the present study, we showed that a representational disorder for words can dissociate from both representational neglect for objects and neglect dyslexia. This study involved 14 brain-damaged patients with left unilateral spatial neglect and a group of normal subjects. Patients were divided into four groups based on the presence of left neglect dyslexia and representational neglect for non-verbal material, as evaluated by the Clock Drawing test. The patients were presented with bisection tasks for words and lines. The word bisection tasks (with words of five and seven letters) comprised the following: (1) representational bisection: the experimenter pronounced a word and then asked the patient to name the letter in the middle position; (2) visual bisection: same as (1) with stimuli presented visually; and (3) motor bisection: the patient was asked to cross out the letter in the middle position. The standard line bisection task was presented using lines of different length. Consistent with the literature, long lines were bisected to the right, whereas short lines, rendered comparable in length to the words of the word bisection test, deviated to the left (crossover effect). Both patients and controls showed the same leftward bias on words in the visual and motor bisection conditions. A significant difference emerged between the groups only in the representational bisection task: the group exhibiting neglect dyslexia associated with representational neglect for objects showed a significant rightward bias, while the other three patient groups and the controls showed a leftward bisection bias. Neither the presence of neglect alone nor the presence of visual neglect dyslexia was sufficient to produce a specific disorder in mental imagery. These results demonstrate a specific representational neglect for words independent of both representational neglect and neglect dyslexia. ©2011 The British Psychological Society.

  20. Unintentional Activation of Translation Equivalents in Bilinguals Leads to Attention Capture in a Cross-Modal Visual Task

    PubMed Central

    Singh, Niharika; Mishra, Ramesh Kumar

    2015-01-01

    Using a variant of the visual world eye tracking paradigm, we examined if language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High and low proficient Hindi-English speaking bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings corresponded to a word phonologically related to the translation equivalent of the spoken word. Results showed that saccade latency was significantly higher towards the target in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate translation equivalents, which bias attention, leading to interference in goal-directed action in the visual domain. PMID:25775184

  1. Tracking the emergence of the consonant bias in visual-word recognition: evidence with developing readers.

    PubMed

    Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat

    2014-01-01

    Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.

  2. Competition between conceptual relations affects compound recognition: the role of entropy.

    PubMed

    Schmidtke, Daniel; Kuperman, Victor; Gagné, Christina L; Spalding, Thomas L

    2016-04-01

    Previous research has suggested that the conceptual representation of a compound is based on a relational structure linking the compound's constituents. Existing accounts of the visual recognition of modifier-head or noun-noun compounds posit that the process involves the selection of a relational structure out of a set of competing relational structures associated with the same compound. In this article, we employ the information-theoretic metric of entropy to gauge relational competition and investigate its effect on the visual identification of established English compounds. The data from two lexical decision megastudies indicate that greater entropy (i.e., increased competition) in a set of conceptual relations associated with a compound is associated with longer lexical decision latencies. This finding suggests that there is competition between potential meanings associated with the same complex word form. We provide empirical support for conceptual composition during compound word processing in a model that incorporates the effect of the integration of co-activated and competing relational information.
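The competition measure described here, Shannon entropy over a compound's distribution of conceptual relations, is straightforward to compute. The relation labels and counts below are hypothetical, chosen only to illustrate low versus high competition:

```python
import math

def relation_entropy(relation_counts):
    """Shannon entropy (bits) of a compound's conceptual-relation distribution.

    Higher entropy means more evenly matched competing relations,
    i.e. more relational competition.
    """
    total = sum(relation_counts.values())
    probs = [c / total for c in relation_counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts: one dominant relation (low competition)...
low_comp = {"MADE OF": 18, "FOR": 1, "LOCATED": 1}
# ...versus evenly matched relations (high competition).
high_comp = {"MADE OF": 7, "FOR": 7, "LOCATED": 6}
print(relation_entropy(low_comp))
print(relation_entropy(high_comp))  # approaches log2(3) as counts equalize
```

On the paper's account, the second compound would be the one with longer lexical decision latencies.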

  3. Blind image quality assessment via probabilistic latent semantic analysis.

    PubMed

    Yang, Xichen; Sun, Quansen; Wang, Tianshu

    2016-01-01

    We propose a blind image quality assessment method that is fully unsupervised and training-free. The new method is based on the hypothesis that the effect caused by distortion can be expressed by certain latent characteristics. Combined with probabilistic latent semantic analysis, the latent characteristics can be discovered by applying a topic model over a visual word dictionary. Four distortion-affected features are extracted to form the visual words in the dictionary: (1) the block-based local histogram; (2) the block-based local mean value; (3) the mean value of contrast within a block; (4) the variance of contrast within a block. Based on the dictionary, the latent topics in the images can be discovered. The discrepancy between the frequency of the topics in an unfamiliar image and in a large number of pristine images is used to measure the image quality. Experimental results for four open databases show that the newly proposed method correlates well with human subjective judgments of diversely distorted images.
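The core of such a pipeline, fitting a topic model over visual-word histograms and scoring an image by how far its topic frequencies fall from those of pristine images, might be sketched as follows. This is a generic pLSA fit by EM on toy counts, not the authors' feature extraction or evaluation code; the toy histograms and the L1 discrepancy are illustrative assumptions:

```python
import numpy as np

def plsa(counts, n_topics, n_iter=100, seed=0):
    """Fit pLSA by EM on a documents-by-visual-words count matrix.

    Returns P(word|topic) (n_topics x n_words) and P(topic|doc) (n_docs x n_topics).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_wz = rng.random((n_topics, n_words))
    p_wz /= p_wz.sum(axis=1, keepdims=True)
    p_zd = rng.random((n_docs, n_topics))
    p_zd /= p_zd.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibility P(z|d,w) proportional to P(z|d) * P(w|z)
        joint = p_zd[:, :, None] * p_wz[None, :, :]          # docs x topics x words
        resp = joint / joint.sum(axis=1, keepdims=True).clip(1e-12)
        # M-step: re-estimate both distributions from expected counts
        expected = counts[:, None, :] * resp
        p_wz = expected.sum(axis=0)
        p_wz /= p_wz.sum(axis=1, keepdims=True).clip(1e-12)
        p_zd = expected.sum(axis=2)
        p_zd /= p_zd.sum(axis=1, keepdims=True).clip(1e-12)
    return p_wz, p_zd

def quality_score(topic_profile, pristine_profiles):
    """L1 discrepancy between an image's topic frequencies and the pristine mean."""
    return np.abs(topic_profile - pristine_profiles.mean(axis=0)).sum()

# Toy visual-word histograms: rows = images, columns = dictionary entries.
pristine = np.array([[8, 1, 1, 0], [7, 2, 1, 0], [9, 0, 1, 0]])
distorted = np.array([[1, 1, 1, 7]])
p_wz, p_zd = plsa(np.vstack([pristine, distorted]), n_topics=2)
print(quality_score(p_zd[-1], p_zd[:-1]))  # larger score = larger discrepancy
```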

  4. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia

    PubMed Central

    Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.

    2014-01-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310

  6. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  7. Effects of visual span on reading speed and parafoveal processing in eye movements during sentence reading.

    PubMed

    Risse, Sarah

    2014-07-15

    The visual span (or “uncrowded window”), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers’ VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.

  8. Object-based attentional selection modulates anticipatory alpha oscillations

    PubMed Central

    Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán

    2015-01-01

    Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection—similarly to spatial and feature-based attention—gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554

  9. Helping Remedial Readers Master the Reading Vocabulary through a Seven Step Method.

    ERIC Educational Resources Information Center

    Aaron, Robert L.

    1981-01-01

    An outline of seven important steps for teaching vocabulary development includes components of language development, visual memory, visual-auditory perception, speeded recall, spelling, reading the word in a sentence, and word comprehension in written context. (JN)

  10. More than words: Using visual graphics for community-based health research.

    PubMed

    Morton Ninomiya, Melody E

    2017-04-20

    With increased attention to knowledge translation and community engagement in the applied health research field, many researchers aim to find effective ways of engaging health policy and decision makers and community stakeholders. While visual graphics such as graphs, charts, figures and photographs are common in scientific research dissemination, they are less common as a communication tool in research. In this commentary, I illustrate how and why visual graphics were created and used to facilitate dialogue and communication throughout all phases of a community-based health research study with a rural Indigenous community, advancing community engagement and knowledge utilization of a research study. I suggest that it is essential that researchers consider the use of visual graphics to accurately communicate and translate important health research concepts and content in accessible forms for diverse research stakeholders and target audiences.

  11. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    PubMed Central

    Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo

    2017-01-01

    In this paper, we propose a Bayesian generative model that can form multiple categories based on each sensory-channel and can associate words with any of the four sensory-channels (action, position, object, and color). This paper focuses on cross-situational learning using the co-occurrence between words and information of sensory-channels in complex situations rather than conventional situations of cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence that describes an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of trials for learning was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between multiple sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on word meanings learned in the cross-situational learning scenario. The experimental results showed that the robot could successfully use the word meanings learned by using the proposed method. PMID:29311888
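The cross-situational principle underlying this work, that word-referent mappings emerge from co-occurrence statistics accumulated over many individually ambiguous scenes, can be shown in miniature. The sketch below uses a bare co-occurrence tally rather than the paper's Bayesian generative model, and all scene data are invented for illustration:

```python
from collections import defaultdict

def cross_situational_learning(situations):
    """Accumulate word-referent co-occurrence counts across ambiguous situations.

    Each situation pairs a set of heard words with a set of candidate referents
    (sensory categories); no single scene disambiguates the mapping, but the
    statistics over many scenes do.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for words, referents in situations:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    # Normalise each word's counts into a conditional distribution P(referent | word)
    return {w: {r: c / sum(rs.values()) for r, c in rs.items()}
            for w, rs in counts.items()}

# Hypothetical scenes: "ball" always co-occurs with the BALL category,
# while the distractor referents vary from scene to scene.
scenes = [({"ball", "red"}, {"BALL", "CUP"}),
          ({"ball", "push"}, {"BALL", "DOG"}),
          ({"cup", "blue"}, {"CUP", "DOG"})]
lexicon = cross_situational_learning(scenes)
print(max(lexicon["ball"], key=lexicon["ball"].get))  # BALL wins on co-occurrence
```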

  12. Separating the influences of prereading skills on early word and nonword reading.

    PubMed

    Shapiro, Laura R; Carroll, Julia M; Solity, Jonathan E

    2013-10-01

    The essential first step for a beginning reader is to learn to match printed forms to phonological representations. For a new word, this is an effortful process where each grapheme must be translated individually (serial decoding). The role of phonological awareness in developing a decoding strategy is well known. We examined whether beginning readers recruit different skills depending on the nature of the words being read (familiar words vs. nonwords). Print knowledge, phoneme and rhyme awareness, rapid automatized naming (RAN), phonological short-term memory (STM), nonverbal reasoning, vocabulary, auditory skills, and visual attention were measured in 392 prereaders 4 and 5 years of age. Word and nonword reading were measured 9 months later. We used structural equation modeling to examine the skills-reading relationship and modeled correlations between our two reading outcomes and among all prereading skills. We found that a broad range of skills were associated with reading outcomes: early print knowledge, phonological STM, phoneme awareness and RAN. Whereas all of these skills were directly predictive of nonword reading, early print knowledge was the only direct predictor of word reading. Our findings suggest that beginning readers draw most heavily on their existing print knowledge to read familiar words. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Chinese and Korean Characters Engage the Same Visual Word Form Area in Proficient Early Chinese-Korean Bilinguals

    PubMed Central

    Bai, Jian'e; Shi, Jinfu; Jiang, Yi; He, Sheng; Weng, Xuchu

    2011-01-01

    A number of recent studies consistently show an area, known as the visual word form area (VWFA), in the left fusiform gyrus that is selectively responsive to visual words in alphabetic scripts as well as in logographic scripts, such as Chinese characters. However, given the large difference between Chinese characters and alphabetic scripts in terms of their orthographic rules, it is not clear, at a fine spatial scale, whether Chinese characters engage the same VWFA in the occipito-temporal cortex as alphabetic scripts. We specifically compared Chinese with Korean script, with Korean script serving as a good example of an alphabetic writing system, but matched to Chinese in the overall square shape. Sixteen proficient early Chinese-Korean bilinguals took part in the fMRI experiment. Four types of stimuli (Chinese characters, Korean characters, line drawings and unfamiliar Chinese faces) were presented in a block-design paradigm. By contrasting characters (Chinese or Korean) to faces, presumed VWFAs could be identified for both Chinese and Korean characters in the left occipito-temporal sulcus in each subject. The locations of the peak response points in these two VWFAs were essentially the same. Further analysis revealed a substantial overlap between the VWFA identified for Chinese and that for Korean. At the group level, there was no significant difference in amplitude of response to Chinese and Korean characters. Spatial patterns of response to Chinese and Korean were similar. In addition to confirming that there is an area in the left occipito-temporal cortex that selectively responds to scripts in both Korean and Chinese in early Chinese-Korean bilinguals, our results show that these two scripts engage essentially the same VWFA, even at the level of fine spatial patterns of activation across voxels. These results suggest that similar populations of neurons are engaged in processing the different scripts within the same VWFA in early bilinguals. PMID:21818386

  14. Projectors, associators, visual imagery, and the time course of visual processing in grapheme-color synesthesia.

    PubMed

    Amsel, Ben D; Kutas, Marta; Coulson, Seana

    2017-10-01

    In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigated the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those for color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent of the experimental effects, all orthographic stimuli elicited larger N170 and P2 components in synesthetes than in controls. While P2 (150-250 ms) enhancement was similar in all synesthetes, N170 (130-210 ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest that immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes, whose concurrent colors are experienced as originating in external space.

  15. Considering the Spatial Layout Information of Bag of Features (BoF) Framework for Image Classification.

    PubMed

    Mu, Guangyu; Liu, Ying; Wang, Limin

    2015-01-01

    Spatial pooling methods such as spatial pyramid matching (SPM) are crucial to the bag-of-features model used in image classification. SPM partitions the image into a set of regular grids and assumes that the spatial layout of every visual word obeys a uniform distribution over these grids. In practice, however, different visual words should obey different spatial layout distributions. To improve on SPM, we develop a novel spatial pooling method, namely spatial distribution pooling (SDP). The proposed SDP method uses an extension of the Gaussian mixture model to estimate the spatial layout distributions of the visual vocabulary. For each visual word type, SDP can generate a set of flexible grids rather than the regular grids of traditional SPM. Furthermore, we can compute grid weights for visual word tokens according to their spatial coordinates. The experimental results demonstrate that SDP outperforms traditional spatial pooling methods and is competitive with state-of-the-art classification accuracy on several challenging image datasets.
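    The contrast is easiest to see against the SPM baseline the abstract describes. The sketch below (toy data, plain Python, not the authors' code) pools visual-word tokens into per-cell histograms over regular 2^l x 2^l grids and concatenates them; this fixed-grid assignment is exactly the regular-grid assumption that SDP replaces with per-word learned layouts:

```python
from collections import Counter

def spm_histogram(features, vocab_size, levels=2):
    """Spatial pyramid pooling: concatenate per-cell visual-word
    histograms over a 2^l x 2^l grid at each pyramid level.
    `features` is a list of (x, y, word_id) with x, y in [0, 1)."""
    hist = []
    for level in range(levels):
        cells = 2 ** level
        counts = Counter()
        for x, y, w in features:
            cell = (int(x * cells), int(y * cells))  # regular-grid assignment
            counts[(cell, w)] += 1
        for cx in range(cells):
            for cy in range(cells):
                hist.extend(counts[((cx, cy), w)] for w in range(vocab_size))
    return hist

# Toy image: three visual-word tokens with normalized coordinates.
feats = [(0.1, 0.2, 0), (0.8, 0.9, 1), (0.3, 0.7, 0)]
h = spm_histogram(feats, vocab_size=2)
# Level 0 contributes 1 cell x 2 words; level 1 contributes 4 cells x 2 words.
print(len(h))  # 10
```

In SDP, the hard `int(x * cells)` cell assignment would instead be replaced by soft, word-specific grid weights estimated from a Gaussian-mixture extension.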

  16. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    ERIC Educational Resources Information Center

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992; de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  17. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    PubMed

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age=58.99months, SD=3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Can colours be used to segment words when reading?

    PubMed

    Perea, Manuel; Tejero, Pilar; Winskel, Heather

    2015-07-01

    Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost for unspaced alternating-colour sentences relative to spaced sentences. Thus, the present data demonstrate that colour can be used to segment words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. Don't words come easy? A psychophysical exploration of word superiority

    PubMed Central

    Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe

    2013-01-01

    Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, to explore the extent and limits of the WSE. Using a carefully controlled list of three-letter words, we show that a WSE can be revealed in vocal reaction times even to undegraded stimuli. With a novel combination of psychophysics and mathematical modeling, we further show that the typical WSE is specifically reflected in perceptual processing speed: single words are simply processed faster than single letters. Intriguingly, when multiple stimuli are presented simultaneously, letters are perceived more easily than words, and this is reflected both in perceptual processing speed and visual short term memory (VSTM) capacity. So, even if single words come easy, there is a limit to the WSE. PMID:24027510

  20. VisualUrText: A Text Analytics Tool for Unstructured Textual Data

    NASA Astrophysics Data System (ADS)

    Zainol, Zuraini; Jaymes, Mohd T. H.; Nohuddin, Puteri N. E.

    2018-05-01

    The amount of unstructured text growing over the Internet is tremendous. Text repositories come from Web 2.0, business intelligence and social networking applications. It is also believed that 80-90% of future data growth will arrive in the form of unstructured text databases that may potentially contain interesting patterns and trends. Text Mining is a well-known technique for discovering interesting patterns and trends, i.e., non-trivial knowledge, from massive unstructured text data. Text Mining covers multidisciplinary fields involving information retrieval (IR), text analysis, natural language processing (NLP), data mining, machine learning, statistics and computational linguistics. This paper discusses the development of a text analytics tool that is proficient in extracting, processing and analyzing unstructured text data and in visualizing the cleaned text in multiple forms such as a Document-Term Matrix (DTM), Frequency Graph, Network Analysis Graph, Word Cloud and Dendrogram. This tool, VisualUrText, is developed to assist students and researchers in extracting interesting patterns and trends in document analyses.
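    Of the visual forms listed, the Document-Term Matrix is the foundation the others build on. A minimal sketch of DTM construction (plain Python with toy documents; not VisualUrText's actual implementation):

```python
from collections import Counter

def document_term_matrix(docs):
    """Build a Document-Term Matrix: one row per document,
    one column per vocabulary term, cells hold raw term counts."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    rows = []
    for doc in tokenized:
        counts = Counter(doc)
        rows.append([counts[term] for term in vocab])
    return vocab, rows

docs = ["text mining finds patterns", "text analytics visualizes text"]
vocab, dtm = document_term_matrix(docs)
print(vocab)  # ['analytics', 'finds', 'mining', 'patterns', 'text', 'visualizes']
print(dtm)    # [[0, 1, 1, 1, 1, 0], [1, 0, 0, 0, 2, 1]]
```

Frequency graphs and word clouds are then just views over the column sums of such a matrix.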

  1. Effect of study context on item recollection.

    PubMed

    Skinner, Erin I; Fernandes, Myra A

    2010-07-01

    We examined how visual context information provided during encoding, and unrelated to the target word, affected later recollection for words presented alone using a remember-know paradigm. Experiments 1A and 1B showed that participants had better overall memory-specifically, recollection-for words studied with pictures of intact faces than for words studied with pictures of scrambled or inverted faces. Experiment 2 replicated these results and showed that recollection was higher for words studied with pictures of faces than when no image accompanied the study word. In Experiment 3 participants showed equivalent memory for words studied with unique faces as for those studied with a repeatedly presented face. Results suggest that recollection benefits when visual context information high in meaningful content accompanies study words and that this benefit is not related to the uniqueness of the context. We suggest that participants use elaborative processes to integrate item and meaningful contexts into ensemble information, improving subsequent item recollection.

  2. Modality dependency of familiarity ratings of Japanese words.

    PubMed

    Amano, S; Kondo, T; Kakehi, K

    1995-07-01

    Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects, in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. The modality dependency was greater for low-familiarity words than for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words is related to orthography: the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.

  3. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
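    Among the weighting baselines the abstract cites, term frequency-inverse document frequency is simple enough to sketch. The toy histograms below illustrate that baseline only, not the paper's boosting-based weighting:

```python
import math

def tfidf_weights(histograms):
    """Compute tf-idf weighted bag-of-visual-words vectors.
    `histograms` is a list of per-image visual-word count lists."""
    n_docs = len(histograms)
    n_words = len(histograms[0])
    # Document frequency: number of images containing each visual word.
    df = [sum(1 for h in histograms if h[w] > 0) for w in range(n_words)]
    idf = [math.log(n_docs / df[w]) if df[w] else 0.0 for w in range(n_words)]
    weighted = []
    for h in histograms:
        total = sum(h) or 1  # normalize term frequency per image
        weighted.append([(h[w] / total) * idf[w] for w in range(n_words)])
    return weighted

hists = [[3, 0, 1], [0, 2, 1], [1, 1, 1]]
w = tfidf_weights(hists)
# Visual word 2 occurs in every image, so its idf (and weight) is 0.
print(w[0][2])  # 0.0
```

The boosting-based scheme proposed in the paper instead learns a discriminative weight per visual word by treating each histogram bin's sub-similarity function as a weak classifier.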

  4. The Neural Basis of the Right Visual Field Advantage in Reading: An MEG Analysis Using Virtual Electrodes

    ERIC Educational Resources Information Center

    Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.

    2011-01-01

    Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…

  5. Semantic mapping reveals distinct patterns in descriptions of social relations in adults with autism spectrum disorder.

    PubMed

    Luo, Sean X; Shinall, Jacqueline A; Peterson, Bradley S; Gerber, Andrew J

    2016-08-01

    Adults with autism spectrum disorder (ASD) may describe other individuals differently compared with typical adults. In this study, we first asked participants to describe closely related individuals such as parents and close friends with 10 positive and 10 negative characteristics. We then used standard natural language processing methods to digitize and visualize these descriptions. The complex patterns of these descriptive sentences exhibited a difference in semantic space between individuals with ASD and control participants. Machine learning algorithms were able to automatically detect and discriminate between these two groups. Furthermore, we showed that these descriptive sentences from adults with ASD exhibited fewer connections as defined by word-word co-occurrences in descriptions, and these connections in words formed a less "small-world" like network. Autism Res 2016, 9: 846-853. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  6. Zipf's word frequency law in natural language: a critical review and future directions.

    PubMed

    Piantadosi, Steven T

    2014-10-01

    The frequency distribution of words has been a key object of study in statistical linguistics for the past 70 years. This distribution approximately follows a simple mathematical form known as Zipf's law. This article first shows that human language has a highly complex, reliable structure in the frequency distribution over and above this classic law, although prior data visualization methods have obscured this fact. A number of empirical phenomena related to word frequencies are then reviewed. These facts are chosen to be informative about the mechanisms giving rise to Zipf's law and are then used to evaluate many of the theoretical explanations of Zipf's law in language. No prior account straightforwardly explains all the basic facts or is supported with independent evaluation of its underlying assumptions. To make progress at understanding why language obeys Zipf's law, studies must seek evidence beyond the law itself, testing assumptions and evaluating novel predictions with new, independent data.
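    Zipf's law states that a word's frequency is roughly inversely proportional to its frequency rank, f(r) ∝ r^(-α) with α close to 1. A minimal sketch of how the exponent is commonly estimated (ordinary least squares in log-log space, run here on synthetic counts constructed to follow the law exactly):

```python
import math
from collections import Counter

def zipf_exponent(text):
    """Estimate the Zipf exponent alpha via ordinary least squares
    on log(frequency) against log(rank)."""
    counts = sorted(Counter(text.lower().split()).values(), reverse=True)
    xs = [math.log(r) for r in range(1, len(counts) + 1)]
    ys = [math.log(c) for c in counts]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return -num / den  # Zipf's law predicts a value close to 1

# Synthetic corpus whose frequencies follow f(r) = 12 / r exactly.
words = " ".join(f"w{r} " * (12 // r) for r in range(1, 5))
print(round(zipf_exponent(words), 2))  # 1.0
```

On real corpora the fit is only approximate, which is precisely the structure "over and above this classic law" that the review discusses.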

  7. Neural correlates of word production stages delineated by parametric modulation of psycholinguistic variables.

    PubMed

    Wilson, Stephen M; Isenberg, Anna Lisette; Hickok, Gregory

    2009-11-01

    Word production is a complex multistage process linking conceptual representations, lexical entries, phonological forms and articulation. Previous studies have revealed a network of predominantly left-lateralized brain regions supporting this process, but many details regarding the precise functions of different nodes in this network remain unclear. To better delineate the functions of regions involved in word production, we used event-related functional magnetic resonance imaging (fMRI) to identify brain areas where blood oxygen level-dependent (BOLD) responses to overt picture naming were modulated by three psycholinguistic variables: concept familiarity, word frequency, and word length, and one behavioral variable: reaction time. Each of these variables has been suggested by prior studies to be associated with different aspects of word production. Processing of less familiar concepts was associated with greater BOLD responses in bilateral occipitotemporal regions, reflecting visual processing and conceptual preparation. Lower frequency words produced greater BOLD signal in left inferior temporal cortex and the left temporoparietal junction, suggesting involvement of these regions in lexical selection and retrieval and encoding of phonological codes. Word length was positively correlated with signal intensity in Heschl's gyrus bilaterally, extending into the mid-superior temporal gyrus (STG) and sulcus (STS) in the left hemisphere. The left mid-STS site was also modulated by reaction time, suggesting a role in the storage of lexical phonological codes.

  8. Words, shape, visual search and visual working memory in 3-year-old children.

    PubMed

    Vales, Catarina; Smith, Linda B

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.

  9. Encoding context and false recognition memories.

    PubMed

    Bruce, Darryl; Phillips-Grant, Kimberly; Conrad, Nicole; Bona, Susan

    2004-09-01

    False recognition of an extralist word that is thematically related to all words of a study list may reflect internal activation of the theme word during encoding followed by impaired source monitoring at retrieval, that is, difficulty in determining whether the word had actually been experienced or merely thought of. To assist source monitoring, distinctive visual or verbal contexts were added to study words at input. Both types of context produced similar effects: False alarms to theme-word (critical) lures were reduced; remember judgements of critical lures called old were lower; and if contextual information had been added to lists, subjects indicated as much for list items and associated critical foils identified as old. The visual and verbal contexts used in the present studies were held to disrupt semantic categorisation of list words at input and to facilitate source monitoring at output.

  10. Prosodic Phonological Representations Early in Visual Word Recognition

    ERIC Educational Resources Information Center

    Ashby, Jane; Martin, Andrea E.

    2008-01-01

    Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable…

  11. The Relation of Visual and Auditory Aptitudes to First Grade Low Readers' Achievement under Sight-Word and Systematic Phonic Instructions. Research Report #36.

    ERIC Educational Resources Information Center

    Gallistel, Elizabeth; And Others

    Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…

  12. Letter position coding across modalities: the case of Braille readers.

    PubMed

    Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo

    2012-01-01

    The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than recognition in the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing or replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.

  13. The Word Shape Hypothesis Re-Examined: Evidence for an External Feature Advantage in Visual Word Recognition

    ERIC Educational Resources Information Center

    Beech, John R.; Mayall, Kate A.

    2005-01-01

    This study investigates the relative roles of internal and external letter features in word recognition. In Experiment 1 the efficacy of outer word fragments (words with all their horizontal internal features removed) was compared with inner word fragments (words with their outer features removed) as primes in a forward masking paradigm. These…

  14. Tracking the Emergence of the Consonant Bias in Visual-Word Recognition: Evidence with Developing Readers

    PubMed Central

    Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat

    2014-01-01

    Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917

  15. A multistream model of visual word recognition.

    PubMed

    Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie

    2009-02-01

    Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)--suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream--as with mixed-case presentation.

  16. Visual Exploration of Semantic Relationships in Neural Word Embeddings

    DOE PAGES

    Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.; ...

    2017-08-29

    Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. In particular, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or even misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.
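    The linear analogy relationships these views target can be illustrated with the classic vector-offset test. The 3-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical 3-d word vectors, crafted so the analogy holds exactly.
emb = {
    "king":  [0.8, 0.9, 0.1],
    "queen": [0.8, 0.1, 0.9],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
}

# Vector-offset analogy: king - man + woman should land nearest to queen.
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max((w for w in emb if w != "king"),
           key=lambda w: cosine(target, emb[w]))
print(best)  # queen
```

A 2-d t-SNE or PCA projection may or may not preserve such offsets, which is the failure mode the paper's specialized analogy views are designed to expose.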

  17. Visual Exploration of Semantic Relationships in Neural Word Embeddings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Shusen; Bremer, Peer-Timo; Thiagarajan, Jayaraman J.

    Constructing distributed representations for words through neural language models and using the resulting vector spaces for analysis has become a crucial component of natural language processing (NLP). But, despite their widespread application, little is known about the structure and properties of these spaces. To gain insights into the relationship between words, the NLP community has begun to adapt high-dimensional visualization techniques. Particularly, researchers commonly use t-distributed stochastic neighbor embeddings (t-SNE) and principal component analysis (PCA) to create two-dimensional embeddings for assessing the overall structure and exploring linear relationships (e.g., word analogies), respectively. Unfortunately, these techniques often produce mediocre or evenmore » misleading results and cannot address domain-specific visualization challenges that are crucial for understanding semantic relationships in word embeddings. We introduce new embedding techniques for visualizing semantic and syntactic analogies, and the corresponding tests to determine whether the resulting views capture salient structures. Additionally, we introduce two novel views for a comprehensive study of analogy relationships. Finally, we augment t-SNE embeddings to convey uncertainty information in order to allow a reliable interpretation. Combined, the different views address a number of domain-specific tasks difficult to solve with existing tools.« less

  18. Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.

    PubMed

    Walsh, B F; Lamberts, F

    1979-03-01

    The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program's errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list with the picture-fading and one with the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of control to shift from picture to printed word, as earlier researchers have hypothesized.

  19. Reading skill and word skipping: Implications for visual and linguistic accounts of word skipping.

    PubMed

    Eskenazi, Michael A; Folk, Jocelyn R

    2015-11-01

    We investigated whether high-skill readers skip more words than low-skill readers as a result of parafoveal processing differences based on reading skill. We manipulated foveal load and word length, two variables that strongly influence word skipping, and measured reading skill using the Nelson-Denny Reading Test. We found that reading skill did not influence the probability of skipping five-letter words, but low-skill readers were less likely to skip three-letter words when foveal load was high. Thus, reading skill is likely to influence word skipping when the amount of information in the parafovea falls within the word identification span. We interpret the data in the context of visual-based (extended optimal viewing position model) and linguistic-based (E-Z Reader model) accounts of word skipping. The models make different predictions about how and why a word is skipped; however, the data indicate that both models should take into account the fact that different factors influence skipping rates for high- and low-skill readers. (c) 2015 APA, all rights reserved.

  20. Imagining the truth and the moon: an electrophysiological study of abstract and concrete word processing.

    PubMed

    Gullick, Margaret M; Mitra, Priya; Coch, Donna

    2013-05-01

    Previous event-related potential studies have indicated that both a widespread N400 and an anterior N700 index differential processing of concrete and abstract words, but the nature of these components in relation to concreteness and imagery has been unclear. Here, we separated the effects of word concreteness and task demands on the N400 and N700 in a single word processing paradigm with a within-subjects, between-tasks design and carefully controlled word stimuli. The N400 was larger to concrete words than to abstract words, and larger in the visualization task condition than in the surface task condition, with no interaction. A marked anterior N700 was elicited only by concrete words in the visualization task condition, suggesting that this component indexes imagery. These findings are consistent with a revised or extended dual coding theory according to which concrete words benefit from greater activation in both verbal and imagistic systems. Copyright © 2013 Society for Psychophysiological Research.

  1. Contextual diversity is a main determinant of word identification times in young readers.

    PubMed

    Perea, Manuel; Soares, Ana Paula; Comesaña, Montserrat

    2013-09-01

    Recent research with college-aged skilled readers by Adelman and colleagues revealed that contextual diversity (i.e., the number of contexts in which a word appears) is a more critical determinant of visual word recognition than mere repeated exposure (i.e., word frequency) (Psychological Science, 2006, Vol. 17, pp. 814-823). Given that contextual diversity has been claimed to be a relevant factor to word acquisition in developing readers, the effects of contextual diversity should also be a main determinant of word identification times in developing readers. A lexical decision experiment was conducted to examine the effects of contextual diversity and word frequency in young readers (children in fourth grade). Results revealed a sizable effect of contextual diversity, but not of word frequency, thereby generalizing Adelman and colleagues' data to a child population. These findings call for the implementation of dynamic developmental models of visual word recognition that go beyond a learning rule by mere exposure. Copyright © 2012 Elsevier Inc. All rights reserved.
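    The two corpus measures contrasted in this abstract, word frequency (total occurrences) and contextual diversity (number of distinct contexts containing a word), can be illustrated with a toy corpus. This is a sketch for illustration only; the three "documents" and the counts are made up.

```python
from collections import Counter

# Each string stands in for one document/context.
corpus = [
    "the cat sat on the mat",
    "the dog chased the cat",
    "a quiet morning by the sea",
]

frequency = Counter()  # total occurrences across the corpus
diversity = Counter()  # number of distinct documents containing the word

for document in corpus:
    tokens = document.split()
    frequency.update(tokens)
    diversity.update(set(tokens))  # each document counts at most once

print(frequency["the"], diversity["the"])  # 5 occurrences, 3 documents
```

    The two measures dissociate whenever a word's occurrences are concentrated in few contexts, which is what allows studies like this one to pit them against each other.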

  2. Recall of short word lists presented visually at fast rates: effects of phonological similarity and word length.

    PubMed

    Coltheart, V; Langdon, R

    1998-03-01

    Phonological similarity of visually presented list items impairs short-term serial recall. Lists of long words are also recalled less accurately than are lists of short words. These results have been attributed to phonological recoding and rehearsal. If subjects articulate irrelevant words during list presentation, both phonological similarity and word length effects are abolished. Experiments 1 and 2 examined effects of phonological similarity and recall instructions on recall of lists shown at fast rates (one item per 0.114-0.50 sec), which might not permit phonological encoding and rehearsal. In Experiment 3, recall instructions and word length were manipulated using fast presentation rates. Both phonological similarity and word length effects were observed, and they were not dependent on recall instructions. Experiments 4 and 5 investigated the effects of irrelevant concurrent articulation on lists shown at fast rates. Both phonological similarity and word length effects were removed by concurrent articulation, as they were with slow presentation rates.

  3. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    PubMed

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. 
All rights reserved.

  4. Semantic priming from crowded words.

    PubMed

    Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick

    2012-06-01

    Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

  5. Evaluating the Performance of a Visually Guided Hearing Aid Using a Dynamic Auditory-Visual Word Congruence Task.

    PubMed

    Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald

    2017-12-15

    The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30°, 0°, and 30° azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. 
Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.

  6. Interhemispheric interaction in the split-brain.

    PubMed

    Lambert, A J

    1991-01-01

    An experiment is reported in which a split-brain patient (LB) was simultaneously presented with two words, one to the left and one to the right of fixation. He was instructed to categorize the right-sided word (living vs. non-living), and to ignore anything appearing to the left of fixation. LB's performance on this task closely resembled that of normal, neurologically intact individuals. Manual response speed was slower when the unattended (left visual field) word belonged to the same category as the right visual field word. Implications of this finding for views of the split-brain syndrome are discussed.

  7. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually, as well as the emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  8. The influence of visual contrast and case changes on parafoveal preview benefits during reading.

    PubMed

    Wang, Chin-An; Inhoff, Albrecht W

    2010-04-01

    Reingold and Rayner (2006) showed that the visual contrast of a fixated target word influenced its viewing duration, but not the viewing of the next (posttarget) word in the text that was shown in regular contrast. Configurational target changes, by contrast, influenced target and posttarget viewing. The current study examined whether this effect pattern can be attributed to differential processing of the posttarget word during target viewing. A boundary paradigm (Rayner, 1975) was used to provide an informative or uninformative posttarget preview and to reveal the word when it was fixated. Consistent with the earlier study, more time was spent viewing the target when its visual contrast was low and its configuration unfamiliar. Critically, target contrast had no effect on the acquisition of useful information from a posttarget preview, but an unfamiliar target configuration diminished the usefulness of an informative posttarget preview. These findings are consistent with Reingold and Rayner's (2006) claim that saccade programming and attention shifting during reading can be controlled by functionally distinct word recognition processes.

  9. Multimodal Alexia: Neuropsychological Mechanisms and Implications for Treatment

    PubMed Central

    Kim, Esther S.; Rapcsak, Steven Z.; Andersen, Sarah; Beeson, Pélagie M.

    2011-01-01

    Letter-by-letter (LBL) reading is the phenomenon whereby individuals with acquired alexia decode words by sequential identification of component letters. In cases where letter recognition or letter naming is impaired, however, a LBL reading approach is obviated, resulting in a nearly complete inability to read, or global alexia. In some such cases, a treatment strategy wherein letter tracing is used to provide tactile and/or kinesthetic input has resulted in improved letter identification. In this study, a kinesthetic treatment approach was implemented with an individual who presented with severe alexia in the context of relatively preserved recognition of orally spelled words, and mildly impaired oral/written spelling. Eight weeks of kinesthetic treatment resulted in improved letter identification accuracy and oral reading of trained words; however, the participant remained unable to successfully decode untrained words. Further testing revealed that, in addition to the visual-verbal disconnection that resulted in impaired word reading and letter naming, her limited ability to derive benefit from the kinesthetic strategy was attributable to a disconnection that prevented access to letter names from kinesthetic input. We propose that this kinesthetic-verbal disconnection resulted from damage to the left parietal lobe and underlying white matter, a neuroanatomical feature that is not typically observed in patients with global alexia or classic LBL reading. This unfortunate combination of visual-verbal and kinesthetic-verbal disconnections demonstrated in this individual resulted in a persistent multimodal alexia syndrome that was resistant to behavioral treatment. 
To our knowledge, this is the first case in which the nature of this form of multimodal alexia has been fully characterized, and our findings provide guidance regarding the requisite cognitive skills and lesion profiles that are likely to be associated with a positive response to tactile/kinesthetic treatment. PMID:21952194

  11. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    PubMed Central

    Colizoli, Olympia; Murre, Jaap M. J.; Rouw, Romke

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the experience of taste, smell and physical sensations for SC. SC's lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC's performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC's synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music, and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (“tasty” vs. “tasteless” words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC's synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC's synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity. PMID:24167497

  12. Gamma-oscillations modulated by picture naming and word reading: Intracranial recording in epileptic patients

    PubMed Central

    Wu, Helen C.; Nagasawa, Tetsuro; Brown, Erik C.; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi

    2011-01-01

    Objective: We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. Methods: We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Results: Both tasks commonly elicited gamma-augmentation (maximally at 80–100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to the reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated with the degree of gamma-augmentation in the medial occipital areas. Conclusions: Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. Significance: The present study increases our understanding of the visual-language pathways. PMID:21498109

  13. Locating the cortical bottleneck for slow reading in peripheral vision

    PubMed Central

    Yu, Deyue; Jiang, Yi; Legge, Gordon E.; He, Sheng

    2015-01-01

    Yu, Legge, Park, Gage, and Chung (2010) suggested that the neural bottleneck for slow peripheral reading is located in nonretinotopic areas. We investigated the potential rate-limiting neural site for peripheral reading using fMRI, and contrasted peripheral reading with recognition of peripherally presented line drawings of common objects. We measured the BOLD responses to both text (three-letter words/nonwords) and line-drawing objects presented either in foveal or peripheral vision (10° lower right visual field) at three presentation rates (2, 4, and 8/second). The statistically significant interaction effect of visual field × presentation rate on the BOLD response for text but not for line drawings provides evidence for distinctive processing of peripheral text. This pattern of results was obtained in all five regions of interest (ROIs). At the early retinotopic cortical areas, the BOLD signal slightly increased with increasing presentation rate for foveal text, and remained fairly constant for peripheral text. In the Occipital Word-Responsive Area (OWRA), Visual Word Form Area (VWFA), and object sensitive areas (LO and PHA), the BOLD responses to text decreased with increasing presentation rate for peripheral but not foveal presentation. In contrast, there was no rate-dependent reduction in BOLD response for line-drawing objects in all the ROIs for either foveal or peripheral presentation. Only peripherally presented text showed a distinctive rate-dependence pattern. Although it is possible that the differentiation starts to emerge at the early retinotopic cortical representation, the neural bottleneck for slower reading of peripherally presented text may be a special property of peripheral text processing in object category selective cortex. PMID:26237299

  14. An electrophysiological investigation of the role of orthography in accessing meaning of Chinese single-character words.

    PubMed

    Wang, Kui

    2011-01-10

    This study examined the role of orthography in the semantic activation of Chinese single-character words. Eighteen native Chinese-speaking adults were recruited to take part in a Stroop experiment consisting of one-character color words and pseudowords that were orthographically similar to these color words. Classic behavioral Stroop effects, namely longer reaction times for incongruent conditions than for congruent conditions, were demonstrated for color words and pseudowords. A clear N450 was also observed in the two incongruent conditions. The participants were also asked to perform a visual judgment task immediately following the Stroop experiment. Results from the visual judgment task showed that participants could distinguish color words and pseudowords well (with a mean accuracy rate over 90 percent). Taken together, these findings support a direct orthography-to-semantics route for Chinese one-character words. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
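    The behavioral Stroop effect described above, the reaction-time cost of incongruent relative to congruent trials, reduces to a simple difference of condition means. A minimal sketch with made-up reaction times (not the study's data):

```python
from statistics import mean

# Hypothetical per-trial reaction times in milliseconds.
congruent_rt = [520, 540, 510, 530]
incongruent_rt = [610, 640, 600, 625]

# Stroop interference: mean incongruent RT minus mean congruent RT.
stroop_effect = mean(incongruent_rt) - mean(congruent_rt)
print(stroop_effect)  # 93.75 ms; a positive value indicates interference
```

    The N450 reported in the abstract is the electrophysiological counterpart of this behavioral cost, observed in the incongruent conditions.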

  15. Individual differences in solving arithmetic word problems

    PubMed Central

    2013-01-01

    Background: With the present functional magnetic resonance imaging (fMRI) study at 3 T, we investigated the neural correlates of visualization and verbalization during arithmetic word problem solving. In the domain of arithmetic, visualization might mean to visualize numbers and (intermediate) results while calculating, and verbalization might mean that numbers and (intermediate) results are verbally repeated during calculation. If the brain areas involved in number processing are domain-specific as assumed, that is, that the left angular gyrus (AG) shows an affinity to the verbal domain, and that the left and right intraparietal sulcus (IPS) shows an affinity to the visual domain, the activation of these areas should show a dependency on an individual's cognitive style. Methods: 36 healthy young adults participated in the fMRI study. The participants' habitual use of visualization and verbalization during solving arithmetic word problems was assessed with a short self-report assessment. During the fMRI measurement, arithmetic word problems that had to be solved by the participants were presented in an event-related design. Results: We found that visualizers showed greater brain activation in brain areas involved in visual processing, and that verbalizers showed greater brain activation within the left angular gyrus. Conclusions: Our results indicate that cognitive styles or preferences play an important role in understanding brain activation. Our results confirm that strong visualizers use mental imagery more strongly than weak visualizers during calculation. Moreover, our results suggest that the left AG shows a specific affinity to the verbal domain and subserves number processing in a modality-specific way. PMID:23883107

  16. Short-Term and Long-Term Effects on Visual Word Recognition

    ERIC Educational Resources Information Center

    Protopapas, Athanassios; Kapnoula, Efthymia C.

    2016-01-01

    Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…

  17. Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words

    ERIC Educational Resources Information Center

    Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard

    2016-01-01

    Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…

  18. Dyslexic children lack word selectivity gradients in occipito-temporal and inferior frontal cortex.

    PubMed

    Olulade, O A; Flowers, D L; Napoliello, E M; Eden, G F

    2015-01-01

    fMRI studies using a region-of-interest approach have revealed that the ventral portion of the left occipito-temporal cortex, which is specialized for orthographic processing of visually presented words (and includes the so-called "visual word form area", VWFA), is characterized by a posterior-to-anterior gradient of increasing selectivity for words in typically reading adults, adolescents, and children (e.g. Brem et al., 2006, 2009). Similarly, the left inferior frontal cortex (IFC) has been shown to exhibit a medial-to-lateral gradient of print selectivity in typically reading adults (Vinckier et al., 2007). Functional brain imaging studies of dyslexia have reported relative underactivity in left hemisphere occipito-temporal and inferior frontal regions using whole-brain analyses during word processing tasks. Hence, the question arises whether gradient sensitivities in these regions are altered in dyslexia. Indeed, a region-of-interest analysis revealed the gradient-specific functional specialization in the occipito-temporal cortex to be disrupted in dyslexic children (van der Mark et al., 2009). Building on these studies, we here (1) investigate if a word-selective gradient exists in the inferior frontal cortex in addition to the occipito-temporal cortex in normally reading children, (2) compare typically reading with dyslexic children, and (3) examine functional connections between these regions in both groups. We replicated the previously reported posterior-to-anterior gradient of increasing selectivity for words in the left occipito-temporal cortex in typically reading children, and its absence in the dyslexic children. Our novel finding is the detection of a pattern of increasing selectivity for words along the medial-to-lateral axis of the left inferior frontal cortex in typically reading children and evidence of functional connectivity between the most lateral aspect of this area and the anterior aspects of the occipito-temporal cortex. 
We report absence of an IFC gradient and connectivity between the lateral aspect of the IFC and the anterior occipito-temporal cortex in the dyslexic children. Together, our results provide insights into the source of the anomalies reported in previous studies of dyslexia and add to the growing evidence of an orthographic role of IFC in reading.

  19. READINESS AND PHONETIC ANALYSIS OF WORDS IN GRADES K-2.

    ERIC Educational Resources Information Center

    Campbell, Bonnie; Quinn, Goldie

    The method used at the Bellevue, Nebraska, public schools to teach reading readiness and the phonetic analysis of words in kindergarten through grade two is described. Suggestions for teaching the readiness skills of auditory and visual perception, vocabulary skills of word recognition and word meaning, and the phonetic analysis of words in grades…

  20. Beginning Readers Activate Semantics from Sub-Word Orthography

    ERIC Educational Resources Information Center

    Nation, Kate; Cocksey, Joanne

    2009-01-01

    Two experiments assessed whether 7-year-old children activate semantic information from sub-word orthography. Children made category decisions to visually-presented words, some of which contained an embedded word (e.g., "hip" in "ship"). In Experiment 1 children were slower and less accurate to classify words if they contained an embedded word…

  1. Visual Imagery and False Memory for Pictures: A Functional Magnetic Resonance Imaging Study in Healthy Participants.

    PubMed

    Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas

    2017-01-01

    Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus was observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.

  2. Visual determinants of reduced performance on the Stroop color-word test in normal aging individuals.

    PubMed

    van Boxtel, M P; ten Tusscher, M P; Metsemakers, J F; Willems, B; Jolles, J

    2001-10-01

    It is unknown to what extent the performance on the Stroop color-word test is affected by reduced visual function in older individuals. We tested the impact of common deficiencies in visual function (reduced distant and close acuity, reduced contrast sensitivity, and color weakness) on Stroop performance among 821 normal individuals aged 53 and older. After adjustment for age, sex, and educational level, low contrast sensitivity was associated with more time needed on card 1 (word naming), red/green color weakness with slower card 2 performance (color naming), and reduced distant acuity with slower performance on card 3 (interference). Half of the age-related variance in speed performance was shared with visual function. The actual impact of reduced visual function may be underestimated in this study if some of this age-related variance in Stroop performance is mediated by visual function decrements. It is suggested that reduced visual function has differential effects on Stroop performance which need to be accounted for when the Stroop test is used both in research and in clinical settings. Stroop performance of older individuals with unknown visual status should be interpreted with caution.

  3. The word processing deficit in semantic dementia: all categories are equal, but some categories are more equal than others.

    PubMed

    Pulvermüller, Friedemann; Cooper-Pye, Elisa; Dine, Clare; Hauk, Olaf; Nestor, Peter J; Patterson, Karalyn

    2010-09-01

    It has been claimed that semantic dementia (SD), the temporal variant of fronto-temporal dementia, is characterized by an across-the-board deficit affecting all types of conceptual knowledge. We here confirm this generalized deficit but also report differential degrees of impairment in processing specific semantic word categories in a case series of SD patients (N = 11). Within the domain of words with strong visually grounded meaning, the patients' lexical decision accuracy was more impaired for color-related than for form-related words. Likewise, within the domain of action verbs, the patients' performance was worse for words referring to face movements and speech acts than for words semantically linked to actions performed with the hand and arm. Psycholinguistic properties were matched between the stimulus groups entering these contrasts; an explanation for the differential degrees of impairment must therefore involve semantic features of the words in the different conditions. Furthermore, this specific pattern of deficits cannot be captured by classic category distinctions such as nouns versus verbs or living versus nonliving things. Evidence from previous neuroimaging research indicates that color- and face/speech-related words, respectively, draw most heavily on anterior-temporal and inferior-frontal areas, the structures most affected in SD. Our account combines (a) the notion of an anterior-temporal amodal semantic "hub" to explain the profound across-the-board deficit in SD word processing, with (b) a semantic topography model of category-specific circuits whose cortical distributions reflect semantic features of the words and concepts represented.

  4. Manipulations of word frequency reveal differences in the processing of morphologically complex and simple words in German

    PubMed Central

    Bronk, Maria; Zwitserlood, Pienie; Bölte, Jens

    2013-01-01

    We tested current models of morphological processing in reading with data from four visual lexical decision experiments using German compounds and monomorphemic words. Triplets of two semantically transparent noun-noun compounds and one monomorphemic noun were used in Experiments 1a and 1b. Stimuli within a triplet were matched for full-form frequency. The frequency of the compounds' constituents was varied. The compounds of a triplet shared one constituent, while the frequency of the unshared constituent was either high or low, but always higher than full-form frequency. Reactions were faster to compounds with high-frequency constituents than to compounds with low-frequency constituents, while the latter did not differ from the monomorphemic words. This pattern was not influenced by task difficulty, induced by the type of pseudocompounds used. Pseudocompounds were either created by altering letters of an existing compound (easy pseudocompounds, Experiment 1a) or by combining two free morphemes into a non-existing, but morphologically legal, compound (difficult pseudocompounds, Experiment 1b). In Experiments 2a and 2b, frequency-matched pairs of semantically opaque noun-noun compounds and simple nouns were tested. In Experiment 2a, with easy pseudocompounds (of the same type as in Experiment 1a), a reaction-time advantage for compounds over monomorphemic words was again observed. This advantage disappeared in Experiment 2b, where difficult pseudocompounds were used. Although a dual-route model might account for the data, the findings are best understood in terms of decomposition of low-frequency complex words prior to lexical access, followed by processing costs due to the recombination of morphemes for meaning access. These processing costs vary as a function of intrinsic factors, such as semantic transparency, or external factors, such as the difficulty of the experimental task. PMID:23986731

  5. Language experience shapes early electrophysiological responses to visual stimuli: the effects of writing system, stimulus length, and presentation duration.

    PubMed

    Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi

    2008-02-15

    How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and, to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that the N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.

  6. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  7. Massive cortical reorganization in sighted Braille readers

    PubMed Central

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-01-01

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), which was modulated by their Braille reading speed, as well as strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills. DOI: http://dx.doi.org/10.7554/eLife.10762.001 PMID:26976813

  8. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  9. Selective attention in anxiety: distraction and enhancement in visual search.

    PubMed

    Rinck, Mike; Becker, Eni S; Kellermann, Jana; Roth, Walton T

    2003-01-01

    According to cognitive models of anxiety, anxiety patients exhibit an attentional bias towards threat, manifested as greater distractibility by threat stimuli and enhanced detection of them. Both phenomena were studied in two experiments, using a modified visual search task, in which participants were asked to find single target words (GAD-related, speech-related, neutral, or positive) hidden in matrices made up of distractor words (also GAD-related, speech-related, neutral, or positive). Generalized anxiety disorder (GAD) patients, social phobia (SP) patients afraid of giving speeches, and healthy controls participated in the visual search task. GAD patients were slowed by GAD-related distractor words but did not show statistically reliable evidence of enhanced detection of GAD-related target words. SP patients showed neither distraction nor enhancement effects. These results extend previous findings of attentional biases observed with other experimental paradigms. Copyright 2003 Wiley-Liss, Inc.

  10. The language used in describing autobiographical memories prompted by life period visually presented verbal cues, event-specific visually presented verbal cues and short musical clips of popular music.

    PubMed

    Zator, Krysten; Katz, Albert N

    2017-07-01

    Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually, either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by a general lifetime-period word cue directing them to that period in their life. A third group heard 30-second-long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue, and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime-period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.
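    The LIWC analysis used in this record reduces each typed memory to per-category percentages of matching words. LIWC's dictionaries are proprietary, but the style of output can be illustrated with a minimal sketch; the category names and word lists below are hypothetical stand-ins, not the actual LIWC dictionaries:

    ```python
    # Minimal sketch of LIWC-style category counting.
    # CATEGORIES here is a toy stand-in for LIWC's proprietary dictionaries.
    import re
    from collections import Counter

    CATEGORIES = {
        "personal_pronouns": {"i", "me", "my", "we", "our", "you"},
        "past_tense": {"was", "were", "went", "said", "had"},
        "negative_emotion": {"sad", "angry", "afraid", "hurt"},
    }

    def liwc_style_counts(text):
        """Return, per category, the percentage of tokens in its word list."""
        tokens = re.findall(r"[a-z']+", text.lower())
        total = len(tokens) or 1  # avoid division by zero on empty input
        counts = Counter()
        for category, words in CATEGORIES.items():
            counts[category] = 100.0 * sum(t in words for t in tokens) / total
        return dict(counts)

    report = liwc_style_counts("I was sad when we went home.")
    ```

    A group difference like the one reported (more personal pronouns and past-tense terms for lifetime-period cues) would then show up as higher mean percentages in those categories across participants.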

  11. Letter Position Coding Across Modalities: The Case of Braille Readers

    PubMed Central

    Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo

    2012-01-01

    Background The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Methodology Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. Principal Findings We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. Conclusions The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus. PMID:23071522

  12. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts.

    PubMed

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2016-06-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  14. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    ERIC Educational Resources Information Center

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  15. Visual Word Recognition by Bilinguals in a Sentence Context: Evidence for Nonselective Lexical Access

    ERIC Educational Resources Information Center

    Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.

    2007-01-01

    Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…

  16. Relationships between Visual and Auditory Perceptual Skills and Comprehension in Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Weaver, Phyllis A.; Rosner, Jerome

    1979-01-01

    Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…

  17. VStops: A Thinking Strategy and Visual Representation Approach in Mathematical Word Problem Solving toward Enhancing STEM Literacy

    ERIC Educational Resources Information Center

    Abdullah, Nasarudin; Halim, Lilia; Zakaria, Effandi

    2014-01-01

    This study aimed to determine the impact of strategic thinking and visual representation approaches (VStops) on the achievement, conceptual knowledge, metacognitive awareness, awareness of problem-solving strategies, and student attitudes toward mathematical word problem solving among primary school students. The experimental group (N = 96)…

  18. ERP Evidence of Hemispheric Independence in Visual Word Recognition

    ERIC Educational Resources Information Center

    Nemrodov, Dan; Harpaz, Yuval; Javitt, Daniel C.; Lavidor, Michal

    2011-01-01

    This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle…

  19. Lung texture classification using bag of visual words

    NASA Astrophysics Data System (ADS)

    Asherov, Marina; Diamant, Idit; Greenspan, Hayit

    2014-03-01

    Interstitial lung diseases (ILD) refer to a group of more than 150 parenchymal lung disorders. High-Resolution Computed Tomography (HRCT) is the most essential imaging modality for ILD diagnosis. Nonetheless, classification of the various lung tissue patterns caused by ILD is still regarded as a challenging task. The current study focuses on the classification of the five most common categories of ILD lung tissue in HRCT images: normal, emphysema, ground glass, fibrosis and micronodules. The objective of the research is to classify an expert-given annotated region of interest (AROI) using a bag of visual words (BoVW) framework. The images are divided into small patches, and a collection of representative patches is defined as the visual words. This procedure, termed dictionary construction, is performed for each individual lung texture category. The assumption is that different lung textures are represented by different visual word distributions. The classification is performed using an SVM classifier with a histogram intersection kernel. In the experiments, we use a dataset of 1018 AROIs from 95 patients, evaluated with leave-one-patient-out cross-validation (LOPO CV). The classification accuracy obtained is close to 80%.
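    The BoVW pipeline described above (patch extraction, dictionary construction by clustering, histogram representation, SVM with a histogram intersection kernel) can be sketched as follows. This is an illustrative toy version, not the authors' implementation: the synthetic images, patch size, and dictionary size stand in for the HRCT AROIs and parameters of the paper, and a single dictionary is built over all patches rather than one per category.

    ```python
    # Toy bag-of-visual-words texture classifier on synthetic data.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def extract_patches(image, size=5):
        """Slide a non-overlapping grid over the image; flatten each patch."""
        h, w = image.shape
        return np.array([image[r:r + size, c:c + size].ravel()
                         for r in range(0, h - size + 1, size)
                         for c in range(0, w - size + 1, size)])

    def histogram_intersection(A, B):
        """Histogram intersection kernel K(x, y) = sum_i min(x_i, y_i)."""
        return np.array([[np.minimum(a, b).sum() for b in B] for a in A])

    # Two synthetic "texture" classes: darker vs. brighter noisy regions.
    images = [rng.normal(mean, 0.3, (20, 20)) for mean in [0.2] * 10 + [0.8] * 10]
    labels = np.array([0] * 10 + [1] * 10)

    # 1. Dictionary construction: cluster patches into 8 visual words.
    all_patches = np.vstack([extract_patches(im) for im in images])
    kmeans = KMeans(n_clusters=8, n_init=5, random_state=0).fit(all_patches)

    # 2. Represent each image as a normalized histogram of visual-word counts.
    def bovw_histogram(image):
        words = kmeans.predict(extract_patches(image))
        hist = np.bincount(words, minlength=8).astype(float)
        return hist / hist.sum()

    X = np.array([bovw_histogram(im) for im in images])

    # 3. SVM with the histogram intersection kernel, as in the paper.
    clf = SVC(kernel=histogram_intersection).fit(X, labels)
    print("training accuracy:", clf.score(X, labels))
    ```

    In the study itself, evaluation is done per patient (LOPO CV): all AROIs from one patient are held out, the classifier is trained on the rest, and the process is repeated for each of the 95 patients.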

  20. Do you remember where sounds, pictures and words came from? The role of the stimulus format in object location memory.

    PubMed

    Delogu, Franco; Lilla, Christopher C

    2017-11-01

    Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separated blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.

  1. Phonological Activation in Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.

    2007-01-01

    Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…

  2. Build an Interactive Word Wall

    ERIC Educational Resources Information Center

    Jackson, Julie

    2018-01-01

    Word walls visually display important vocabulary covered during class. Although teachers have often been encouraged to post word walls in their classrooms, little information is available to guide them. This article describes steps science teachers can follow to transform traditional word walls into interactive teaching tools. It also describes a…

  3. Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?

    ERIC Educational Resources Information Center

    McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc

    2010-01-01

    To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…

  4. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation

    PubMed Central

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R.; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words “GREEN” or “RED” were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying “GREEN” or “RED” had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system. PMID:26958463

  5. Klinefelter syndrome has increased brain responses to auditory stimuli and motor output, but not to visual stimuli or Stroop adaptation.

    PubMed

    Wallentin, Mikkel; Skakkebæk, Anne; Bojesen, Anders; Fedder, Jens; Laurberg, Peter; Østergaard, John R; Hertz, Jens Michael; Pedersen, Anders Degn; Gravholt, Claus Højbjerg

    2016-01-01

    Klinefelter syndrome (47, XXY) (KS) is a genetic syndrome characterized by the presence of an extra X chromosome and low level of testosterone, resulting in a number of neurocognitive abnormalities, yet little is known about brain function. This study investigated the fMRI-BOLD response from KS relative to a group of Controls to basic motor, perceptual, executive and adaptation tasks. Participants (N: KS = 49; Controls = 49) responded to whether the words "GREEN" or "RED" were displayed in green or red (incongruent versus congruent colors). One of the colors was presented three times as often as the other, making it possible to study both congruency and adaptation effects independently. Auditory stimuli saying "GREEN" or "RED" had the same distribution, making it possible to study effects of perceptual modality as well as Frequency effects across modalities. We found that KS had an increased response to motor output in primary motor cortex and an increased response to auditory stimuli in auditory cortices, but no difference in primary visual cortices. KS displayed a diminished response to written visual stimuli in secondary visual regions near the Visual Word Form Area, consistent with the widespread dyslexia in the group. No neural differences were found in inhibitory control (Stroop) or in adaptation to differences in stimulus frequencies. Across groups we found a strong positive correlation between age and BOLD response in the brain's motor network with no difference between groups. No effects of testosterone level or brain volume were found. In sum, the present findings suggest that auditory and motor systems in KS are selectively affected, perhaps as a compensatory strategy, and that this is not a systemic effect as it is not seen in the visual system.

  6. “Distracters” Do Not Always Distract: Visual Working Memory for Angry Faces is Enhanced by Incidental Emotional Words

    PubMed Central

    Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.

    2012-01-01

    We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782

  7. Identifying selective visual attention biases related to fear of pain by tracking eye movements within a dot-probe paradigm.

    PubMed

    Yang, Zhou; Jackson, Todd; Gao, Xiao; Chen, Hong

    2012-08-01

    This research examined selective biases in visual attention related to fear of pain by tracking eye movements (EM) toward pain-related stimuli among the pain-fearful. EM of 21 young adults scoring high on a fear of pain measure (H-FOP) and 20 lower-scoring (L-FOP) control participants were measured during a dot-probe task that featured sensory pain-neutral, health catastrophe-neutral and neutral-neutral word pairs. Analyses indicated that the H-FOP group was more likely to direct immediate visual attention toward sensory pain and health catastrophe words than was the L-FOP group. The H-FOP group also had comparatively shorter first fixation latencies toward sensory pain and health catastrophe words. Conversely, groups did not differ on EM indices of attentional maintenance (i.e., first fixation duration, gaze duration, and average fixation duration) or reaction times to dot probes. Finally, both groups showed a cycle of disengagement followed by re-engagement toward sensory pain words relative to other word types. In sum, this research is the first to reveal biases toward pain stimuli during very early stages of visual information processing among the highly pain-fearful and highlights the utility of EM tracking as a means to evaluate visual attention as a dynamic process in the context of FOP. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.

  8. Parafoveal preview benefit in reading is only obtained from the saccade goal.

    PubMed

    McDonald, Scott A

    2006-12-01

    Previous research has demonstrated that reading is less efficient when parafoveal visual information about upcoming words is invalid or unavailable; the benefit from a valid preview is realised as reduced reading times on the subsequently foveated word, and has been explained with reference to the allocation of attentional resources to parafoveal word(s). This paper presents eyetracking evidence that preview benefit is obtained only for words that are selected as the saccade target. Using a gaze-contingent display change paradigm (Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81), the position of the triggering boundary was set near the middle of the pretarget word. When a refixation saccade took the eye across the boundary in the pretarget word, there was no reliable effect of the validity of the target word preview. However, when the triggering boundary was positioned just after the pretarget word, a robust preview benefit was observed, replicating previous research. The current results complement findings from studies of basic visual function, suggesting that for the case of preview benefit in reading, attentional and oculomotor processes are obligatorily coupled.

  9. A different outlook on time: visual and auditory month names elicit different mental vantage points for a time-space synaesthete.

    PubMed

    Jarick, Michelle; Dixon, Mike J; Stewart, Mark T; Maxwell, Emily C; Smilek, Daniel

    2009-01-01

Synaesthesia is a fascinating condition whereby individuals report extraordinary experiences when presented with ordinary stimuli. Here we examined an individual (L) who experiences time units (i.e., months of the year and hours of the day) as occupying specific spatial locations (e.g., January is 30 degrees to the left of midline). This form of time-space synaesthesia was recently investigated by Smilek et al. (2007), who demonstrated that synaesthetic time-space associations are highly consistent, occur regardless of intention, and can direct spatial attention. We extended this work by showing that for the synaesthete L, her time-space vantage point changes depending on whether the time units are seen or heard. For example, when L sees the word JANUARY, she reports experiencing January on her left side; however, when she hears the word "January," she experiences the month on her right side. L's subjective reports were validated using a spatial cueing paradigm. The names of months were centrally presented, followed by targets on the left or right. L was faster at detecting targets in validly cued locations relative to invalidly cued locations both for visually presented cues (January orients attention to the left) and for aurally presented cues (January orients attention to the right). We replicated this difference in visual and aural cueing effects using hours of the day. Our findings support previous research showing that time-space synaesthesia can bias visual spatial attention, and further suggest that for this synaesthete, time-space associations differ depending on whether they are visually or aurally induced.

  10. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five-letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally, within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects; however, they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects, suggesting that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects, indicating that larger-scale information may still play a role in word recognition.

  11. Origins of the specialization for letters and numbers in ventral occipitotemporal cortex.

    PubMed

    Hannagan, Thomas; Amedi, Amir; Cohen, Laurent; Dehaene-Lambertz, Ghislaine; Dehaene, Stanislas

    2015-07-01

    Deep in the occipitotemporal cortex lie two functional regions, the visual word form area (VWFA) and the number form area (NFA), which are thought to play a special role in letter and number recognition, respectively. We review recent progress made in characterizing the origins of these symbol form areas in children or adults, sighted or blind subjects, and humans or monkeys. We propose two non-mutually-exclusive hypotheses on the origins of the VWFA and NFA: the presence of a connectivity bias, and a sensitivity to shape features. We assess the explanatory power of these hypotheses, describe their consequences, and offer several experimental tests. Copyright © 2015 Elsevier Ltd. All rights reserved.

  12. The brain adapts to orthography with experience: Evidence from English and Chinese

    PubMed Central

    Cao, Fan; Brennan, Christine; Booth, James R.

    2016-01-01

Using functional magnetic resonance imaging (fMRI), we examined the process of language specialization in the brain by comparing developmental changes in two contrastive orthographies: Chinese and English. In a visual word rhyming judgment task, we found a significant interaction between age and language in left inferior parietal lobule and left superior temporal gyrus, which was due to greater developmental increases in English than in Chinese. Moreover, we found that higher reading skill was correlated with greater activation in left inferior parietal lobule only in English children. These findings suggest that the regions associated with phonological processing are essential in English reading development. We also found greater developmental increases in English than in Chinese in left inferior temporal gyrus, suggesting refinement of this region for fine-grained word form recognition. In contrast, greater developmental increases in Chinese than in English were found in right middle occipital gyrus, suggesting the importance of holistic visual-orthographic analysis in Chinese reading acquisition. Our results suggest that the brain adapts to the special features of the orthography by engaging relevant brain regions to a greater degree over development. PMID:25444089

  13. A Bayesian generative model for learning semantic hierarchies

    PubMed Central

    Mittelman, Roni; Sun, Min; Kuipers, Benjamin; Savarese, Silvio

    2014-01-01

Building fine-grained visual recognition systems that are capable of recognizing tens of thousands of categories has received much attention in recent years. The well-known semantic hierarchical structure of categories and concepts has been shown to provide a key prior which allows for optimal predictions. The hierarchical organization of various domains and concepts has been subject to extensive research, and led to the development of the WordNet domains hierarchy (Fellbaum, 1998), which was also used to organize the images in the ImageNet (Deng et al., 2009) dataset, in which the category count approaches the human capacity. Still, for the human visual system, the form of the hierarchy must be discovered with minimal use of supervision or innate knowledge. In this work, we propose a new Bayesian generative model for learning such domain hierarchies, based on semantic input. Our model is motivated by the super-subordinate organization of domain labels and concepts that characterizes WordNet, and accounts for several important challenges: maintaining context information when progressing deeper into the hierarchy, learning a coherent semantic concept for each node, and modeling uncertainty in the perception process. PMID:24904452

  14. The role of visual representations during the lexical access of spoken words

    PubMed Central

    Lewis, Gwyneth; Poeppel, David

    2015-01-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579

  15. The role of visual representations during the lexical access of spoken words.

    PubMed

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Brain mechanisms of recovery from pure alexia: A single case study with multiple longitudinal scans.

    PubMed

    Cohen, Laurent; Dehaene, Stanislas; McCormick, Samantha; Durant, Szonya; Zanker, Johannes M

    2016-10-01

Pure alexia is an acquired reading disorder, typically due to a left occipito-temporal lesion affecting the Visual Word Form Area (VWFA). It is unclear whether the VWFA acts as a unique bottleneck for reading, or whether alternative routes are available for recovery. Here, we address this issue through the single-case longitudinal study of a neuroscientist who experienced pure alexia and participated in 17 behavioral, 9 anatomical, and 9 fMRI assessment sessions over a period of two years. The origin of the impairment was assigned to a small left fusiform lesion, accompanied by a loss of VWFA responsivity and by the degeneration of the associated white matter pathways. fMRI experiments allowed us to image longitudinally the visual perception of words, as compared to other classes of stimuli, as well as the mechanisms of letter-by-letter reading. The progressive improvement of reading was not associated with the re-emergence of a new area selective to words, but with increasing responses in spared occipital cortex posterior to the lesion and in contralateral right occipital cortex. Those regions showed a non-specific increase of activations over time and an increase in functional correlation with distant language areas. Those results confirm the existence of an alternative occipital route for reading, bypassing the VWFA, but they also point to its key limitation: the patient remained a slow letter-by-letter reader, thus supporting the critical importance of the VWFA for the efficient parallel recognition of written words. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Sustained meaning activation for polysemous but not homonymous words: evidence from EEG.

    PubMed

    MacGregor, Lucy J; Bouwsema, Jennifer; Klepousniotou, Ekaterini

    2015-02-01

    Theoretical linguistic accounts of lexical ambiguity distinguish between homonymy, where words that share a lexical form have unrelated meanings, and polysemy, where the meanings are related. The present study explored the psychological reality of this theoretical assumption by asking whether there is evidence that homonyms and polysemes are represented and processed differently in the brain. We investigated the time-course of meaning activation of different types of ambiguous words using EEG. Homonyms and polysemes were each further subdivided into two: unbalanced homonyms (e.g., "coach") and balanced homonyms (e.g., "match"); metaphorical polysemes (e.g., "mouth") and metonymic polysemes (e.g., "rabbit"). These four types of ambiguous words were presented as primes in a visual single-word priming delayed lexical decision task employing a long ISI (750 ms). Targets were related to one of the meanings of the primes, or were unrelated. ERPs formed relative to the target onset indicated that the theoretical distinction between homonymy and polysemy was reflected in the N400 brain response. For targets following homonymous primes (both unbalanced and balanced), no effects survived at this long ISI indicating that both meanings of the prime had already decayed. On the other hand, for polysemous primes (both metaphorical and metonymic), activation was observed for both dominant and subordinate senses. The observed processing differences between homonymy and polysemy provide evidence in support of differential neuro-cognitive representations for the two types of ambiguity. We argue that the polysemous senses act collaboratively to strengthen the representation, facilitating maintenance, while the competitive nature of homonymous meanings leads to decay. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Hierarchical levels of representation in language prediction: The influence of first language acquisition in highly proficient bilinguals.

    PubMed

    Molinaro, Nicola; Giannelli, Francesco; Caffarra, Sendy; Martin, Clara

    2017-07-01

Language comprehension is largely supported by predictive mechanisms that account for the ease and speed with which communication unfolds. Both native and proficient non-native speakers can efficiently handle contextual cues to generate reliable linguistic expectations. However, the link between the variability of the linguistic background of the speaker and the hierarchical format of the representations predicted is still not clear. We here investigate whether native language exposure to typologically highly diverse languages (Spanish and Basque) affects the way early balanced bilingual speakers carry out language predictions. During Spanish sentence comprehension, participants developed predictions of words whose form (noun ending) could be either diagnostic of grammatical gender values (transparent) or totally ambiguous (opaque). We measured electrophysiological prediction effects time-locked both to the target word and to its determiner, with the former being expected or unexpected. Event-related (N200-N400) and oscillatory activity in the low beta-band (15-17 Hz) frequency channel showed that both Spanish and Basque natives optimally carry out lexical predictions independently of word transparency. Crucially, in contrast to Spanish natives, Basque natives displayed visual word form predictions for transparent words, consistent with the important role that noun endings (post-nominal suffixes) play in their native language. We conclude that early language exposure largely shapes prediction mechanisms, so that bilinguals reading in their second language rely on the distributional regularities that are highly relevant in their first language. More importantly, we show that individual linguistic experience hierarchically modulates the format of the predicted representation. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals

    PubMed Central

    Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric

    2011-01-01

    Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315

  20. γ-oscillations modulated by picture naming and word reading: intracranial recording in epileptic patients.

    PubMed

    Wu, Helen C; Nagasawa, Tetsuro; Brown, Erik C; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi

    2011-10-01

We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Both tasks commonly elicited gamma-augmentation (maximally at 80-100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to word reading, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated with the degree of gamma-augmentation in the medial occipital areas. Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the portion of primary visual cortex representing the more peripheral visual field. The present study increases our understanding of the visual-language pathways. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.

  1. Hemispheric Asymmetry in Event Knowledge Activation During Incremental Language Comprehension: A Visual Half-Field ERP Study

    PubMed Central

    Metusalem, Ross; Kutas, Marta; Urbach, Thomas P.; Elman, Jeffrey L.

    2016-01-01

    During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event, or semantically anomalous but unrelated to the described event. For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, event-related anomalous words elicited a reduced N400 relative to event-unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation between event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. PMID:26878980

  2. Hemispheric asymmetry in event knowledge activation during incremental language comprehension: A visual half-field ERP study.

    PubMed

    Metusalem, Ross; Kutas, Marta; Urbach, Thomas P; Elman, Jeffrey L

    2016-04-01

    During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event (Event-Related), or semantically anomalous but unrelated to the described event (Event-Unrelated). For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, Event-Related anomalous words elicited a reduced N400 relative to Event-Unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation of event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Visual thinking in action: visualizations as used on whiteboards.

    PubMed

    Walny, Jagoda; Carpendale, Sheelagh; Riche, Nathalie Henry; Venolia, Gina; Fawcett, Philip

    2011-12-01

While it is still most common for information visualization researchers to develop new visualizations from a data- or task-driven perspective, there is growing interest in understanding the types of visualizations people create by themselves for personal use. As part of this recent direction, we have studied a large collection of whiteboards in a research institution, where people make active use of combinations of words, diagrams and various types of visuals to help them further their thought processes. Our goal is to arrive at a better understanding of the nature of visuals that are created spontaneously during brainstorming, thinking, communicating, and general problem solving on whiteboards. We use the qualitative approaches of open coding, interviewing, and affinity diagramming to explore the use of recognizable and novel visuals, and the interplay between visualization and diagrammatic elements with words, numbers and labels. We discuss the potential implications of our findings on information visualization design. © 2011 IEEE

  4. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    ERIC Educational Resources Information Center

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  5. Interactive Word Walls

    ERIC Educational Resources Information Center

    Jackson, Julie; Narvaez, Rose

    2013-01-01

    It is common to see word walls displaying the vocabulary that students have learned in class. Word walls serve as visual scaffolds and are a classroom strategy used to reinforce reading and language arts instruction. Research shows a strong relationship between student word knowledge and academic achievement (Stahl and Fairbanks 1986). As a…

  6. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    ERIC Educational Resources Information Center

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  7. Independent Effects of Orthographic and Phonological Facilitation on Spoken Word Production in Mandarin

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang

    2009-01-01

    A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…

  8. Charting the functional relevance of Broca's area for visual word recognition and picture naming in Dutch using fMRI-guided TMS.

    PubMed

    Wheat, Katherine L; Cornelissen, Piers L; Sack, Alexander T; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo

    2013-05-01

    Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within ∼100ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we used online transcranial magnetic stimulation (TMS) to investigate whether LIFGpo/PCG is necessary for (not just correlated with) visual word recognition by ∼100ms. Pulses were delivered to individually fMRI-defined LIFGpo/PCG in Dutch speakers 75-500ms after stimulus onset during reading and picture naming. Reading and picture naming reaction times were significantly slower following pulses at 225-300ms. Contrary to predictions, there was no disruption to reading for pulses before 225ms. This does not provide evidence in favour of a functional role for LIFGpo/PCG in reading before 225ms in this case, but does extend previous findings with picture stimuli to written Dutch words.

  9. Optimal viewing position in vertically and horizontally presented Japanese words.

    PubMed

    Kajii, N; Osaka, N

    2000-11-01

    In the present study, the optimal viewing position (OVP) phenomenon in Japanese Hiragana was investigated, with special reference to a comparison between the vertical and the horizontal meridians in the visual field. In the first experiment, word recognition scores were determined while the eyes were fixating predetermined locations in vertically and horizontally displayed words. Similar to what has been reported for Roman scripts, OVP curves, which were asymmetric with respect to the beginning of words, were observed in both conditions. However, this asymmetry was less pronounced for vertically than for horizontally displayed words. In the second experiment, the visibility of individual characters within strings was examined for the vertical and horizontal meridians. As for Roman characters, letter identification scores were better in the right than in the left visual field. However, identification scores did not differ between the upper and the lower sides of fixation along the vertical meridian. The results showed that the model proposed by Nazir, O'Regan, and Jacobs (1991) cannot entirely account for the OVP phenomenon. A model in which visual and lexical factors are combined is proposed instead.

  10. Reading speed benefits from increased vertical word spacing in normal peripheral vision.

    PubMed

    Chung, Susana T L

    2004-07-01

    Crowding, the adverse spatial interaction due to proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. The purposes of this study were to (1) demonstrate that crowding exists at the word level and (2) examine whether or not reading speed in central and peripheral vision can be enhanced with increased vertical word spacing. Five normal observers read aloud sequences of six unrelated four-letter words presented on a computer monitor, one word at a time, using rapid serial visual presentation (RSVP). Reading speeds were calculated based on the RSVP exposure durations yielding 80% correct. Testing was conducted at the fovea and at 5 degrees and 10 degrees in the inferior visual field. Critical print size (CPS) for each observer and at each eccentricity was first determined by measuring reading speeds for four print sizes using unflanked words. We then presented words at 0.8x or 1.4x CPS, with each target word flanked by two other words, one above and one below the target word. Reading speeds were determined for vertical word spacings (baseline-to-baseline separation between two vertically separated words) ranging from 0.8x to 2x the standard single-spacing, as well as the unflanked condition. At the fovea, reading speed increased with vertical word spacing up to about 1.2x to 1.5x the standard spacing and remained constant and similar to the unflanked reading speed at larger vertical word spacings. In the periphery, reading speed also increased with vertical word spacing, but it remained below the unflanked reading speed for all spacings tested. At 2x the standard spacing, peripheral reading speed was still about 25% lower than the unflanked reading speed for both eccentricities and print sizes. Results from a control experiment showed that the greater reliance of peripheral reading speed on vertical word spacing was also found in the right visual field. Increased vertical word spacing, which presumably decreases the adverse effect of crowding between adjacent lines of text, benefits reading speed. This benefit is greater in peripheral than central vision.

  11. Bag-of-visual-ngrams for histopathology image classification

    NASA Astrophysics Data System (ADS)

    López-Monroy, A. Pastor; Montes-y-Gómez, Manuel; Escalante, Hugo Jair; Cruz-Roa, Angel; González, Fabio A.

    2013-11-01

    This paper describes an extension of the Bag-of-Visual-Words (BoVW) representation for image categorization (IC) of histopathology images. This representation is one of the most used approaches in several high-level computer vision tasks. However, the BoVW representation has an important limitation: it disregards spatial relationships among visual words. This information may be useful to capture discriminative visual patterns in specific computer vision tasks. In order to overcome this problem we propose the use of visual n-grams. N-gram-based representations are very popular in the field of natural language processing (NLP), in particular within text mining and information retrieval. We propose building a codebook of n-grams and then representing images by histograms of visual n-grams. We evaluate our proposal in the challenging task of classifying histopathology images. The novelty of our proposal lies in the fact that we use n-grams as attributes for a classification model (together with visual words, i.e., 1-grams). This is common practice within NLP, although, to the best of our knowledge, this idea has not been explored yet within computer vision. We report experimental results in a database of histopathology images where our proposed method outperforms the traditional BoVW formulation.
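    The representation described in this record can be sketched in a few lines: quantize local patch descriptors against a codebook of visual words (1-grams), then add n-grams formed from spatially adjacent words. The sketch below is a minimal, hypothetical illustration of that idea, not the authors' implementation; the grid size, codebook, descriptor dimensionality, and the choice of horizontal bigrams are illustrative assumptions.

    ```python
    import numpy as np

    def quantize(descriptors, codebook):
        """Assign each patch descriptor to its nearest codebook centre (visual word id)."""
        d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
        return d2.argmin(axis=1)

    def bovn_histogram(word_grid, n_words):
        """Normalized histogram of visual 1-grams plus horizontally adjacent 2-grams."""
        uni = np.bincount(word_grid.ravel(), minlength=n_words).astype(float)
        # encode a bigram of adjacent words as: left_word * n_words + right_word
        left = word_grid[:, :-1].ravel()
        right = word_grid[:, 1:].ravel()
        bi = np.bincount(left * n_words + right,
                         minlength=n_words * n_words).astype(float)
        h = np.concatenate([uni, bi])
        return h / h.sum()

    # toy data: 8-dim descriptors sampled on a 6x6 patch grid, 4 visual words
    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(4, 8))
    patches = rng.normal(size=(6 * 6, 8))
    word_grid = quantize(patches, codebook).reshape(6, 6)
    hist = bovn_histogram(word_grid, n_words=4)  # length 4 + 16 = 20
    ```

    The resulting 20-dimensional vector (1-gram counts concatenated with 2-gram counts) would then serve as the image's feature vector for a classifier, which is the sense in which n-grams become attributes of the classification model.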

  12. Morphological effects in children word reading: a priming study in fourth graders.

    PubMed

    Casalis, Séverine; Dusautoir, Marion; Colé, Pascale; Ducrot, Stéphanie

    2009-09-01

    A growing corpus of evidence suggests that morphology could play a role in reading acquisition, and that young readers could be sensitive to the morphemic structure of written words. In the present experiment, we examined whether and when morphological information is activated in word recognition. French fourth graders made visual lexical decisions to derived words preceded by primes sharing either a morphological or an orthographic relationship with the target. Results showed significant and equivalent facilitation priming effects for both morphologically and orthographically related primes at the shortest prime duration, but at the longer prime duration a significant facilitation effect only for morphologically related primes. Thus, these results strongly suggest that a morphological level is involved in children's visual word recognition, although it is not distinct from the formal one at an early stage of word processing.

  13. The effects of articulatory suppression on word recognition in Serbian.

    PubMed

    Tenjović, Lazar; Lalović, Dejan

    2005-11-01

    The relatedness of phonological coding to the articulatory mechanisms in visual word recognition varies across writing systems. While articulatory suppression (i.e., continuous verbalising during a visual word processing task) has a detrimental effect on the processing of Japanese words printed in regular syllabic Kana script, it has no such effect on the processing of irregular alphabetic English words. Besner (1990) proposed an experiment in Serbian, which is written in two regular alphabetic scripts (Cyrillic and Roman), to disentangle the importance of script regularity vs. the syllabic-alphabetic dimension for the effects observed. Articulatory suppression had an equally detrimental effect in a lexical decision task for both alphabetically regular and distorted (by a mixture of the two alphabets) Serbian words, but comparisons of articulatory suppression effect size obtained in Serbian to those obtained in English and Japanese suggest "alphabeticity-syllabicity" to be the more critical dimension in determining the relatedness of phonological coding and articulatory activity.

  14. Hemispheric asymmetry in holistic processing of words.

    PubMed

    Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N

    2018-05-13

    Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres for holistic word processing. Observers performed same/different judgment on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.

  15. Left-Lateralized Contributions of Saccades to Cortical Activity During a One-Back Word Recognition Task.

    PubMed

    Chang, Yu-Cherng C; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N; Hämäläinen, Matti S; Temereanca, Simona

    2018-01-01

    Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.

  16. Letter position coding across modalities: braille and sighted reading of sentences with jumbled words.

    PubMed

    Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo

    2015-04-01

    This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.

  18. Masked priming and ERPs dissociate maturation of orthographic and semantic components of visual word recognition in children

    PubMed Central

    Eddy, Marianna D.; Grainger, Jonathan; Holcomb, Phillip J.; Mitra, Priya; Gabrieli, John D. E.

    2014-01-01

    This study examined the time-course of reading single words in children and adults using masked repetition priming and the recording of event-related potentials. The N250 and N400 repetition priming effects were used to characterize form- and meaning-level processing, respectively. Children had larger amplitude N250 effects than adults for both shorter and longer duration primes. Children did not differ from adults on the N400 effect. The difference on the N250 suggests that automaticity for form processing is still maturing in children relative to adults, while the lack of differentiation on the N400 effect suggests that meaning processing is relatively mature by late childhood. The overall similarity in the children’s repetition priming effects to adults’ effects is in line with theories of reading acquisition, according to which children rapidly transition to an orthographic strategy for fast access to semantic information from print. PMID:24313638

  19. The Relationship between Intrinsic Couplings of the Visual Word Form Area with Spoken Language Network and Reading Ability in Children and Adults

    PubMed Central

    Li, Yu; Zhang, Linjun; Xia, Zhichao; Yang, Jie; Shu, Hua; Li, Ping

    2017-01-01

    Reading plays a key role in education and communication in modern society. Learning to read establishes the connections between the visual word form area (VWFA) and language areas responsible for speech processing. Using resting-state functional connectivity (RSFC) and Granger Causality Analysis (GCA) methods, the current developmental study aimed to identify the difference in the relationship between the connections of VWFA-language areas and reading performance in both adults and children. The results showed that: (1) the spontaneous connectivity between VWFA and the spoken language areas, i.e., the left inferior frontal gyrus/supramarginal gyrus (LIFG/LSMG), was stronger in adults compared with children; (2) the spontaneous functional patterns of connectivity between VWFA and language network were negatively correlated with reading ability in adults but not in children; (3) the causal influence from LIFG to VWFA was negatively correlated with reading ability only in adults but not in children; (4) the RSFCs between left posterior middle frontal gyrus (LpMFG) and VWFA/LIFG were positively correlated with reading ability in both adults and children; and (5) the causal influence from LIFG to LSMG was positively correlated with reading ability in both groups. These findings provide insights into the relationship between VWFA and the language network for reading, and the role of the unique features of Chinese in the neural circuits of reading. PMID:28690507
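    The Granger Causality Analysis (GCA) used in this record asks whether past values of one signal improve prediction of another beyond the target's own history. A minimal lag-1 sketch of that test is shown below; it is a hypothetical illustration of the general technique, not the paper's GCA pipeline, and the toy signals and coupling strength are invented for the example.

    ```python
    import numpy as np

    def granger_f(x, y):
        """F-statistic for 'x Granger-causes y' with a single lag (minimal form).

        Compares a restricted model  y_t ~ 1 + y_{t-1}
        against a full model         y_t ~ 1 + y_{t-1} + x_{t-1}
        via their residual sums of squares.
        """
        Y = y[1:]
        Xr = np.column_stack([np.ones(len(Y)), y[:-1]])          # restricted
        Xf = np.column_stack([Xr, x[:-1]])                       # full
        rss_r = ((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2).sum()
        rss_f = ((Y - Xf @ np.linalg.lstsq(Xf, Y, rcond=None)[0]) ** 2).sum()
        df = len(Y) - Xf.shape[1]
        return (rss_r - rss_f) / (rss_f / df)

    # toy data: y is driven by the previous sample of x, but not vice versa
    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

    f_xy = granger_f(x, y)  # large: x helps predict y
    f_yx = granger_f(y, x)  # small: y does not help predict x
    ```

    In a directed analysis like the one this record reports (e.g., LIFG to VWFA), the statistic is computed per direction and per subject, so the asymmetry between f_xy and f_yx is what carries the causal interpretation.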
