Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?
Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.
2015-01-01
Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study-list words is accompanied by related pictures, relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word, relative to hearing the word only. In both cases (pictures relative to visual words, and visual words relative to auditory words alone), the benefit of pictures and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory-only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in that condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799
Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.
Yoshizaki, K
2001-12-01
The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words, Katakana-familiar and Hiragana-familiar, were used as word stimuli: the former are words more frequently written in Katakana script, and the latter are words written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their more familiar script form; in the visually unfamiliar condition, words were presented in their less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain (superior performance with bilateral presentation relative to either unilateral visual field) was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.
Word learning and the cerebral hemispheres: from serial to parallel processing of written words
Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura
2009-01-01
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140
Effects of audio-visual presentation of target words in word translation training
NASA Astrophysics Data System (ADS)
Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko
2004-05-01
Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in the other language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and were presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for these words, produced by two talkers, was also assessed. During the pretest, accuracy was lowest for A stimuli, implying that insufficient translation ability and listening ability interact with each other when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words on the basis of visual information only. The effect of translation training using AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not help improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of those words. Listeners have a stable orthographic representation, but no phonological representation, of words that have been read frequently but never heard or spoken, as may be the case for low-frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for nonwords presented visually. In the first experiment, recognition of auditorily presented nonwords was facilitated when they had previously appeared on a visually presented list. The second experiment was similar but included a concurrent articulation task during the visual list presentation, thus preventing covert rehearsal of the nonwords; the results were similar to those of the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
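The visual enhancement measure Ra described in this abstract is conventionally computed, in the Sumby-Pollack tradition, as the audiovisual gain over auditory-only performance scaled by the maximum gain still available. The abstract does not spell out the formula, so the sketch below assumes that standard definition, with scores as proportions correct:

```python
def visual_enhancement(av: float, a: float) -> float:
    """Visual enhancement Ra = (AV - A) / (1 - A).

    av: proportion correct in the audiovisual format.
    a:  proportion correct in the auditory-only format.
    Scales the audiovisual gain by the room left for
    improvement above the auditory-only score.
    """
    if a >= 1.0:
        return 0.0  # auditory-only already at ceiling; no gain possible
    return (av - a) / (1.0 - a)

# e.g., 60% correct auditory-only, 80% correct audiovisual:
# Ra = (0.80 - 0.60) / (1 - 0.60) = 0.5
```

On this definition, Ra = 1 means lipreading information recovered all of the performance the auditory signal failed to deliver, which is why it is a more comparable index across listeners than the raw AV-minus-A difference.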
Direct comparison of four implicit memory tests.
Rajaram, S; Roediger, H L
1993-07-01
Four verbal implicit memory tests, word identification, word stem completion, word fragment completion, and anagram solution, were directly compared in one experiment and were contrasted with free recall. On all implicit tests, priming was greatest from prior visual presentation of words, less (but significant) from auditory presentation, and least from pictorial presentations. Typefont did not affect priming. In free recall, pictures were recalled better than words. The four implicit tests all largely index perceptual (lexical) operations in recognizing words, or visual word form representations.
ERIC Educational Resources Information Center
Brochard, Renaud; Tassin, Maxime; Zagar, Daniel
2013-01-01
The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…
Processing of threat-related information outside the focus of visual attention.
Calvo, Manuel G; Castillo, M Dolores
2005-05-01
This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by 2 simultaneous prime words (1 foveal, i.e., at fixation; 1 parafoveal, i.e., 2.2 deg. of visual angle from fixation), which were presented for 150 ms, one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.
Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid
2005-06-01
Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.
Nakagawa, A; Sukigara, M
2000-09-01
The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, and subjects performed lexical decisions on them. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar-script condition did increasing presentation time affect the two visual fields differently. To examine whether this lateral difference in the processing of unfamiliar scripts is related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, whereas orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form.
The Neural Basis of Obligatory Decomposition of Suffixed Words
ERIC Educational Resources Information Center
Lewis, Gwyneth; Solomyak, Olla; Marantz, Alec
2011-01-01
Recent neurolinguistic studies present somewhat conflicting evidence concerning the role of the inferior temporal cortex (IT) in visual word recognition within the first 200 ms after presentation. On the one hand, fMRI studies of the Visual Word Form Area (VWFA) suggest that the IT might recover representations of the orthographic form of words.…
Visual hallucinations in schizophrenia: confusion between imagination and perception.
Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S
2008-05-01
An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.
Emotional words facilitate lexical but not early visual processing.
Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M
2015-12-12
Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e., in early visual cortex. However, studies of emotional words have produced mixed results. In the current study, ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity during emotional word processing. Neutral and negative words were flickered at 12.14 Hz while participants performed a lexical decision task. Neither emotional word content nor word lexicality modulated the 12.14 Hz SSVEP amplitude. However, emotional words affected the ERP. Negative compared to neutral words, as well as words compared to pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that the processing of written emotional words is facilitated only subsequent to visual analysis.
Rapid extraction of gist from visual text and its influence on word recognition.
Asano, Michiko; Yokosawa, Kazuhiko
2011-01-01
Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Auditory Emotional Cues Enhance Visual Perception
ERIC Educational Resources Information Center
Zeelenberg, Rene; Bocanegra, Bruno R.
2010-01-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…
ERIC Educational Resources Information Center
Burton, John K.; Wildman, Terry M.
The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…
Deployment of spatial attention to words in central and peripheral vision.
Ducrot, Stéphanie; Grainger, Jonathan
2007-05-01
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.
D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando
2015-01-01
The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly in centro-parietal areas for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, post-auditory visualization involved right-hemispheric activity following a posterior-to-anterior pathway sequence (occipital, parietal, and temporal areas); conversely, matching visualization involved left-hemispheric activity following an anterior-to-posterior pathway sequence (frontal, temporal, parietal, and occipital areas).
These results suggest that, for both concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodied representations. PMID:26175697
A Critical Boundary to the Left-Hemisphere Advantage in Visual-Word Processing
ERIC Educational Resources Information Center
Deason, R.G.; Marsolek, C.J.
2005-01-01
Two experiments explored boundary conditions for the ubiquitous left-hemisphere advantage in visual-word recognition. Subjects perceptually identified words presented directly to the left or right hemisphere. Strong left-hemisphere advantages were observed for UPPERCASE and lowercase words. However, only a weak effect was observed for…
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task, in order to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment, which consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had been presented during the encoding phase and, if so, with which type of information. Word recognition in the auditory condition was higher than in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. The results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
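The confidence-of-recognition score d' reported here is the standard signal-detection sensitivity index: the z-transformed hit rate minus the z-transformed false-alarm rate. The abstract does not detail the computation, so the sketch below assumes that textbook definition (rates already corrected away from 0 and 1, e.g. by a log-linear correction):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(H) - z(FA).

    hit_rate: proportion of old items correctly called "old".
    fa_rate:  proportion of new items incorrectly called "old".
    Both must lie strictly between 0 and 1.
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# e.g., d_prime(0.8, 0.2) is about 1.68; d_prime(0.5, 0.5) is 0 (chance)
```

Unlike raw hit rate, d' separates sensitivity from response bias, which is why it is the usual index when recognition conditions are compared.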
Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.
Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S
2011-03-30
Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aiming to investigate the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations exhibited less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they exhibited both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding for lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images that such patients experience from the word stimuli.
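The abstract does not name the clustering index used. A common choice for quantifying semantic clustering in free recall is the Adjusted Ratio of Clustering (ARC; Roenker, Thompson, & Brown, 1971), which compares the observed number of same-category adjacencies in the recall sequence with its chance expectation. The sketch below is an illustration of that measure, not necessarily the one used in the study:

```python
def arc_score(recall_order, category_of):
    """Adjusted Ratio of Clustering: ARC = (R - E(R)) / (maxR - E(R)).

    recall_order: words in the order the participant recalled them.
    category_of:  dict mapping each word to its semantic category.
    R counts adjacent recalls from the same category; E(R) is its
    chance expectation; maxR is R under perfect clustering.
    Returns 1.0 for perfect clustering, ~0.0 for chance-level order.
    """
    cats = [category_of[w] for w in recall_order]
    n = len(cats)
    r = sum(1 for a, b in zip(cats, cats[1:]) if a == b)  # observed repetitions
    counts = {}
    for c in cats:
        counts[c] = counts.get(c, 0) + 1
    e_r = sum(v * v for v in counts.values()) / n - 1     # chance expectation
    max_r = n - len(counts)                               # perfect clustering
    if max_r == e_r:
        return 0.0  # degenerate list: clustering is undefined
    return (r - e_r) / (max_r - e_r)
```

Serial clustering is scored analogously, with "same category" replaced by "adjacent in the original study-list order".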
Caffeine Improves Left Hemisphere Processing of Positive Words
Kuchinke, Lars; Lux, Vanessa
2012-01-01
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893
The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition
ERIC Educational Resources Information Center
Robert, Christelle; Mathey, Stephanie; Zagar, Daniel
2007-01-01
The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g.,nasse/basse-lasse-tasse-masse) or were unequally…
Evidence for Early Morphological Decomposition in Visual Word Recognition
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2010-01-01
We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…
Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei
2017-01-01
Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.
Developmental changes in the inferior frontal cortex for selecting semantic representations
Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li
2012-01-01
Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10- to 15-year-old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Unlike English, Chinese has many homophones, in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task, and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757
van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A
2005-05-01
Event-related brain potentials were used to study the retrieval of visual semantic information for concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval subjects were presented with a lexical decision task on auditorily presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.
Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J
2017-01-01
In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Aviles, Alberto; Afonso, Olivia; Scheepers, Christoph; Carreiras, Manuel
2009-01-01
In the present visual-world experiment, participants were presented with visual displays that included a target item that was a semantic associate of an abstract or a concrete word. This manipulation allowed us to test a basic prediction derived from the qualitatively different representational framework that supports the view of different…
Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.
ERIC Educational Resources Information Center
Burton, John K.; Bruning, Roger H.
1982-01-01
Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…
Teaching the Meaning of Words to Children with Visual Impairments
ERIC Educational Resources Information Center
Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.
2014-01-01
In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…
ERIC Educational Resources Information Center
White, Sarah J.; Hirotani, Masako; Liversedge, Simon P.
2012-01-01
Two experiments are presented that examine how the visual characteristics of Japanese words influence eye movement behaviour during reading. In Experiment 1, reading behaviour was compared for words comprising either one or two kanji characters. The one-character words were significantly less likely to be fixated on first-pass, and had…
Developmental Differences for Word Processing in the Ventral Stream
ERIC Educational Resources Information Center
Olulade, Olumide A.; Flowers, D. Lynn; Napoliello, Eileen M.; Eden, Guinevere F.
2013-01-01
The visual word form system (VWFS), located in the occipito-temporal cortex, is involved in orthographic processing of visually presented words (Cohen et al., 2002). Recent fMRI studies in children and adults have demonstrated a gradient of increasing word-selectivity along the posterior-to-anterior axis of this system (Vinckier et al., 2007), yet…
The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words
ERIC Educational Resources Information Center
Lázaro, Miguel; Sainz, Javier; Illera, Víctor
2015-01-01
In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…
Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language
ERIC Educational Resources Information Center
Norman, Tal; Degani, Tamar; Peleg, Orna
2017-01-01
The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…
The impact of inverted text on visual word processing: An fMRI study.
Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D
2018-06-01
Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. Examining how word recognition processes may be disrupted by different text orientations may yield new insights into the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased activation rather than decreased activation. It is hypothesized here that VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.
Do preschool children learn to read words from environmental prints?
Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su
2014-01-01
Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. It was therefore not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., anything apart from the presentation of the words themselves in logo format, such as the color, logo, and font type cues) gradually minimized. Children aged 3 to 5 were tested. We observed that children of all ages performed better when words were presented in highly familiar logos than when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrate that young children did not easily learn words by extracting their visual form information even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4.
Coltheart, V; Langdon, R
1998-03-01
Phonological similarity of visually presented list items impairs short-term serial recall. Lists of long words are also recalled less accurately than are lists of short words. These results have been attributed to phonological recoding and rehearsal. If subjects articulate irrelevant words during list presentation, both phonological similarity and word length effects are abolished. Experiments 1 and 2 examined effects of phonological similarity and recall instructions on recall of lists shown at fast rates (one item every 0.114-0.50 sec), which might not permit phonological encoding and rehearsal. In Experiment 3, recall instructions and word length were manipulated using fast presentation rates. Both phonological similarity and word length effects were observed, and they were not dependent on recall instructions. Experiments 4 and 5 investigated the effects of irrelevant concurrent articulation on lists shown at fast rates. Both phonological similarity and word length effects were removed by concurrent articulation, as they were with slow presentation rates.
Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.
Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric
2013-01-04
It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.
Effects of auditory and visual modalities in recall of words.
Gadzella, B M; Whitehead, D A
1975-02-01
Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual (picture) modalities but was not significantly different from the visual (printed word) modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall for conditions with multiple modes of representation of stimuli was significantly higher than for conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.
Risse, Sarah
2014-07-15
The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading, when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.
Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.
Shillcock, R; Ellison, T M; Monaghan, P
2000-10-01
Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.
The effect of compression and attention allocation on speech intelligibility. II
NASA Astrophysics Data System (ADS)
Choi, Sangsook; Carrell, Thomas
2004-05-01
Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.
An ERP investigation of visual word recognition in syllabary scripts.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2013-06-01
The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 (within-script priming), in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.
Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz
2010-01-01
Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., “Does xxx sound like an existing word?”) presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538
Perea, Manuel; Panadero, Victoria
2014-01-01
The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children, a finding consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.
Kim, Kyung Hwan; Kim, Ja Hyun
2006-02-20
The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written forms on cortical processing of visually presented words, because of partial similarity/difference among words of these languages, and the familiarity of native Koreans with these three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate neuronal circuitries that are involved only in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, a high-temporal resolution is required and thus we utilized event-related potential (ERP) obtained from high-density electroencephalograms. The differences and similarities observed from statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density, appear to be in line with demands of visual and semantic analysis resulting from the characteristics of each language, and the expected task difficulties for native Korean subjects.
Ostarek, Markus; Huettig, Falk
2017-03-01
The notion that processing spoken (object) words involves activation of category-specific representations in visual cortex is a key prediction of modality-specific theories of representation that contrasts with theories assuming dedicated conceptual representational systems abstracted away from sensorimotor systems. In the present study, we investigated whether participants can detect otherwise invisible pictures of objects when they are presented with the corresponding spoken word shortly before the picture appears. Our results showed facilitated detection for congruent ("bottle" → picture of a bottle) versus incongruent ("bottle" → picture of a banana) trials. A second experiment investigated the time-course of the effect by manipulating the timing of picture presentation relative to word onset and revealed that it arises as soon as 200-400 ms after word onset and decays at 600 ms after word onset. Together, these data strongly suggest that spoken words can rapidly activate low-level category-specific visual representations that affect the mere detection of a stimulus, that is, what we see. More generally, our findings fit best with the notion that spoken words activate modality-specific visual representations that are low level enough to provide information related to a given token and at the same time abstract enough to be relevant not only for previously seen tokens but also for generalizing to novel exemplars one has never seen before. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Representational neglect for words as revealed by bisection tasks.
Arduino, Lisa S; Marinelli, Chiara Valeria; Pasotti, Fabrizio; Ferrè, Elisa Raffaella; Bottini, Gabriella
2012-03-01
In the present study, we showed that a representational disorder for words can dissociate from both representational neglect for objects and neglect dyslexia. This study involved 14 brain-damaged patients with left unilateral spatial neglect and a group of normal subjects. Patients were divided into four groups based on presence of left neglect dyslexia and representational neglect for non-verbal material, as evaluated by the Clock Drawing test. The patients were presented with bisection tasks for words and lines. The word bisection tasks (with words of five and seven letters) comprised the following: (1) representational bisection: the experimenter pronounced a word and then asked the patient to name the letter in the middle position; (2) visual bisection: same as (1) with stimuli presented visually; and (3) motor bisection: the patient was asked to cross out the letter in the middle position. The standard line bisection task was presented using lines of different length. Consistent with the literature, long lines were bisected to the right and short lines, rendered comparable in length to the words of the word bisection test, deviated to the left (crossover effect). Both patients and controls showed the same leftward bias on words in the visual and motor bisection conditions. A significant difference emerged between the groups only in the case of the representational bisection task, whereas the group exhibiting neglect dyslexia associated with representational neglect for objects showed a significant rightward bias, while the other three patient groups and the controls showed a leftward bisection bias. Neither the presence of neglect alone nor the presence of visual neglect dyslexia was sufficient to produce a specific disorder in mental imagery. These results demonstrate a specific representational neglect for words independent of both representational neglect and neglect dyslexia. ©2011 The British Psychological Society.
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2012-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…
Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi
2008-02-15
How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that the N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.
Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?
Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling
2016-01-01
Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: unrecognizable words due to visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context so as to affect the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366
Functions of graphemic and phonemic codes in visual word-recognition.
Meyer, D E; Schvaneveldt, R W; Ruddy, M G
1974-03-01
Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.
Encourage Students to Read through the Use of Data Visualization
ERIC Educational Resources Information Center
Bandeen, Heather M.; Sawin, Jason E.
2012-01-01
Instructors are always looking for new ways to engage students in reading assignments. The authors present a few techniques that rely on a web-based data visualization tool called Wordle (wordle.net). Wordle creates word frequency representations called word clouds. The larger a word appears within a cloud, the more frequently it occurs within a…
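Tools like Wordle derive each word's font size from its frequency in the text. A minimal sketch of that counting-and-scaling step (the tokenization rule and the linear size mapping here are illustrative assumptions, not Wordle's actual algorithm) could look like:

```python
from collections import Counter
import re

def word_frequencies(text, stopwords=frozenset()):
    """Count word occurrences, ignoring case, punctuation, and stopwords."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(t for t in tokens if t not in stopwords)

def font_sizes(freqs, min_pt=10, max_pt=48):
    """Scale each word's font size linearly with its relative frequency."""
    top = max(freqs.values())
    return {w: min_pt + (max_pt - min_pt) * n / top for w, n in freqs.items()}

freqs = word_frequencies("to read or not to read, that is the question")
# 'to' and 'read' occur twice each, so they receive the largest font size
sizes = font_sizes(freqs)
```

A real word cloud adds a layout step on top of this; the frequency-to-size mapping is the part that makes reading patterns visible at a glance.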
Jackson, Margaret C.; Linden, David E. J.; Raymond, Jane E.
2012-01-01
We are often required to filter out distraction in order to focus on a primary task during which working memory (WM) is engaged. Previous research has shown that negative versus neutral distracters presented during a visual WM maintenance period significantly impair memory for neutral information. However, the contents of WM are often also emotional in nature. The question we address here is how incidental information might impact upon visual WM when both this and the memory items contain emotional information. We presented emotional versus neutral words during the maintenance interval of an emotional visual WM faces task. Participants encoded two angry or happy faces into WM, and several seconds into a 9 s maintenance period a negative, positive, or neutral word was flashed on the screen three times. A single neutral test face was presented for retrieval with a face identity that was either present or absent in the preceding study array. WM for angry face identities was significantly better when an emotional (negative or positive) versus neutral (or no) word was presented. In contrast, WM for happy face identities was not significantly affected by word valence. These findings suggest that the presence of emotion within an intervening stimulus boosts the emotional value of threat-related information maintained in visual WM and thus improves performance. In addition, we show that incidental events that are emotional in nature do not always distract from an ongoing WM task. PMID:23112782
Morphable Word Clouds for Time-Varying Text Data Visualization.
Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee
2015-12-01
A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focuses on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, thus yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of a time-varying text data from the shape transition, and people can also observe the details from the word clouds in frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.
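The paper's rigid-body formulation is considerably richer, but the core idea of nudging word-tags toward a target shape while keeping them from overlapping can be sketched with a single damped Euler step. The anchors, force constants, and time step below are illustrative assumptions, not the authors' parameters:

```python
def step_tags(positions, anchors, dt=0.1, k_attract=1.0, k_repel=0.5, min_dist=1.0):
    """One explicit-Euler update: each tag is pulled toward its anchor inside
    the target shape and pushed away from any tag closer than min_dist."""
    new_positions = []
    for i, (x, y) in enumerate(positions):
        ax, ay = anchors[i]
        fx = k_attract * (ax - x)          # spring force toward the shape anchor
        fy = k_attract * (ay - y)
        for j, (ox, oy) in enumerate(positions):
            if i == j:
                continue
            dx, dy = x - ox, y - oy
            d = (dx * dx + dy * dy) ** 0.5
            if 0 < d < min_dist:           # overlapping tags: push apart
                fx += k_repel * dx / d
                fy += k_repel * dy / d
        new_positions.append((x + dt * fx, y + dt * fy))
    return new_positions

# Two overlapping tags anchored at the same spot drift toward it while separating.
pts = step_tags([(0.0, 0.0), (0.2, 0.0)], anchors=[(1.0, 0.0), (1.0, 0.0)])
```

Iterating such steps while moving the anchors between frames is one way to obtain the smooth shape-to-shape transitions the abstract describes.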
Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?
ERIC Educational Resources Information Center
McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc
2010-01-01
To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…
Bicknell, Klinton; Levy, Roger
2012-01-01
Decades of empirical work have shown that a range of eye movement phenomena in reading are sensitive to the details of the process of word identification. Despite this, major models of eye movement control in reading do not explicitly model word identification from visual input. This paper presents an argument for developing models of eye movements that do include detailed models of word identification. Specifically, we argue that insights into eye movement behavior can be gained by understanding which phenomena naturally arise from an account in which the eyes move for efficient word identification, and that one important use of such models is to test which eye movement phenomena can be understood this way. As an extended case study, we present evidence from an extension of a previous model of eye movement control in reading that does explicitly model word identification from visual input, Mr. Chips (Legge, Klitz, & Tjan, 1997), to test two proposals for the effect of using linguistic context on reading efficiency. PMID:23074362
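Mr. Chips-style models treat word identification as inference over a lexicon given the letters visible at the current fixation. A toy version of that candidate-filtering step, with a made-up five-word lexicon and a uniform prior (not the Legge et al. implementation), might be:

```python
import math

def consistent_words(lexicon, visible):
    """Keep lexicon entries matching the visible letters; None = not yet seen."""
    n = len(visible)
    return [w for w in lexicon
            if len(w) == n and all(v is None or v == c for v, c in zip(visible, w))]

def residual_entropy(lexicon, visible):
    """Uncertainty (bits) about the word, assuming a uniform prior over candidates."""
    k = len(consistent_words(lexicon, visible))
    return math.log2(k) if k else float("nan")

LEXICON = ["brain", "bribe", "tribe", "touch", "couch"]
# Seeing only the first letter 'b' leaves two candidates: 'brain' and 'bribe'.
cands = consistent_words(LEXICON, ["b", None, None, None, None])
```

An ideal-observer reader then picks the next fixation position that is expected to reduce this residual entropy fastest, which is the sense in which "the eyes move for efficient word identification."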
Do handwritten words magnify lexical effects in visual word recognition?
Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel
2016-01-01
An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.
Dysfunctional visual word form processing in progressive alexia
Wilson, Stephen M.; Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.
2013-01-01
Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. 
In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694
ERIC Educational Resources Information Center
Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.
2004-01-01
The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…
Reading Habits, Perceptual Learning, and Recognition of Printed Words
ERIC Educational Resources Information Center
Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram
2004-01-01
The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…
Shtyrov, Yury; MacGregor, Lucy J
2016-05-24
Rapid and efficient processing of external information by the brain is vital to survival in a highly dynamic environment. The key channel humans use to exchange information is language, but the neural underpinnings of its processing are still not fully understood. We investigated the spatio-temporal dynamics of neural access to word representations in the brain by scrutinising the brain's activity elicited in response to psycholinguistically, visually and phonologically matched groups of familiar words and meaningless pseudowords. Stimuli were briefly presented on the visual-field periphery to experimental participants whose attention was occupied with a non-linguistic visual feature-detection task. The neural activation elicited by these unattended orthographic stimuli was recorded using multi-channel whole-head magnetoencephalography, and the timecourse of lexically-specific neuromagnetic responses was assessed in sensor space as well as at the level of cortical sources, estimated using individual MR-based distributed source reconstruction. Our results demonstrate a neocortical signature of automatic near-instant access to word representations in the brain: activity in the perisylvian language network characterised by specific activation enhancement for familiar words, starting as early as ~70 ms after the onset of unattended word stimuli and underpinned by temporal and inferior-frontal cortices.
Chen, Yi-Chuan; Spence, Charles
2013-01-01
The time-course of cross-modal semantic interactions between pictures and either naturalistic sounds or spoken words was compared. Participants performed a speeded picture categorization task while hearing a task-irrelevant auditory stimulus presented at various stimulus onset asynchronies (SOAs) with respect to the visual picture. Both naturalistic sounds and spoken words gave rise to cross-modal semantic congruency effects (i.e., facilitation by semantically congruent sounds and inhibition by semantically incongruent sounds, as compared to a baseline noise condition) when the onset of the sound led that of the picture by 240 ms or more. Both naturalistic sounds and spoken words also gave rise to inhibition irrespective of their semantic congruency when presented within 106 ms of the onset of the picture. The peak of this cross-modal inhibitory effect occurred earlier for spoken words than for naturalistic sounds. These results therefore demonstrate that the semantic priming of visual picture categorization by auditory stimuli only occurs when the onset of the sound precedes that of the visual stimulus. The different time-courses observed for naturalistic sounds and spoken words likely reflect the different processing pathways to access the relevant semantic representations.
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggests that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
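The printed-word paradigm's dependent measure is the share of fixations landing on each display role (target, phonological competitor, distractor). A minimal tally of that measure, using hypothetical trial data rather than the study's recordings, could be:

```python
def fixation_proportions(fixations, roles):
    """Proportion of fixations landing on each display role.
    fixations: list of fixated word indices; roles: index -> role label."""
    counts = {}
    for idx in fixations:
        role = roles[idx]
        counts[role] = counts.get(role, 0) + 1
    total = len(fixations)
    return {role: n / total for role, n in counts.items()}

# Four words on screen: position 0 is the target, 1 the competitor, 2-3 distractors.
roles = {0: "target", 1: "competitor", 2: "distractor", 3: "distractor"}
props = fixation_proportions([0, 0, 1, 2, 0, 1, 0, 3], roles)
```

A competitor effect of the kind reported above would show up as the competitor proportion exceeding the per-item distractor proportion.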
An ERP Investigation of Visual Word Recognition in Syllabary Scripts
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.
2013-01-01
The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278
Hemispheric asymmetry in holistic processing of words.
Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N
2018-05-13
Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words, and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres to holistic word processing. Observers performed same/different judgments on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.
The Impact of Visual-Spatial Attention on Reading and Spelling in Chinese Children
ERIC Educational Resources Information Center
Liu, Duo; Chen, Xi; Wang, Ying
2016-01-01
The present study investigated the associations of visual-spatial attention with word reading fluency and spelling in 92 third grade Hong Kong Chinese children. Word reading fluency was measured with a timed reading task whereas spelling was measured with a dictation task. Results showed that visual-spatial attention was a unique predictor of…
Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.
Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro
2011-12-01
The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s each, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched previously studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.
Intrusive effects of implicitly processed information on explicit memory.
Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven
2002-02-01
This study described the interference of implicitly processed information on the memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitate implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality of the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information and in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."
Roverud, Elin; Best, Virginia; Mason, Christine R; Streeter, Timothy; Kidd, Gerald
2017-12-15
The "visually guided hearing aid" (VGHA), consisting of a beamforming microphone array steered by eye gaze, is an experimental device being tested for effectiveness in laboratory settings. Previous studies have found that beamforming without visual steering can provide significant benefits (relative to natural binaural listening) for speech identification in spatialized speech or noise maskers when sound sources are fixed in location. The aim of the present study was to evaluate the performance of the VGHA in listening conditions in which target speech could switch locations unpredictably, requiring visual steering of the beamforming. To address this aim, the present study tested an experimental simulation of the VGHA in a newly designed dynamic auditory-visual word congruence task. Ten young normal-hearing (NH) and 11 young hearing-impaired (HI) adults participated. On each trial, three simultaneous spoken words were presented from three source positions (-30, 0, and 30 azimuth). An auditory-visual word congruence task was used in which participants indicated whether there was a match between the word printed on a screen at a location corresponding to the target source and the spoken target word presented acoustically from that location. Performance was compared for a natural binaural condition (stimuli presented using impulse responses measured on KEMAR), a simulated VGHA condition (BEAM), and a hybrid condition that combined lowpass-filtered KEMAR and highpass-filtered BEAM information (BEAMAR). In some blocks, the target remained fixed at one location across trials, and in other blocks, the target could transition in location between one trial and the next with a fixed but low probability. Large individual variability in performance was observed. There were significant benefits for the hybrid BEAMAR condition relative to the KEMAR condition on average for both NH and HI groups when the targets were fixed. 
Although not apparent in the averaged data, some individuals showed BEAM benefits relative to KEMAR. Under dynamic conditions, BEAM and BEAMAR performance dropped significantly immediately following a target location transition. However, performance recovered by the second word in the sequence and was sustained until the next transition. When performance was assessed using an auditory-visual word congruence task, the benefits of beamforming reported previously were generally preserved under dynamic conditions in which the target source could move unpredictably from one location to another (i.e., performance recovered rapidly following source transitions) while the observer steered the beamforming via eye gaze, for both young NH and young HI groups.
Mechanisms of attention in reading parafoveal words: a cross-linguistic study in children.
Siéroff, Eric; Dahmen, Riadh; Fagard, Jacqueline
2012-05-01
The right visual field superiority (RVFS) for words may be explained by the cerebral lateralization for language, the scanning habits in relation to script direction, and spatial attention. The present study explored the influence of spatial attention on the RVFS in relation to scanning habits in school-age children. French second- and fourth-graders identified briefly presented French parafoveal words. Tunisian second- and fourth-graders identified Arabic words, and Tunisian fourth-graders identified French words. The distribution of spatial attention was evaluated by using a distracter in the visual field opposite the word. The results of the correct identification score showed that reading direction had only a partial effect on the identification of parafoveal words and the distribution of attention, with a clear RVFS and a larger effect of the distracter in the left visual field in French children reading French words, and an absence of asymmetry when Tunisian children read Arabic words. Fourth-grade Tunisian children also showed an RVFS when reading French words without an asymmetric distribution of attention, suggesting that their native language may have partially influenced reading strategies in the newly learned language. However, the mode of letter processing, evaluated by a qualitative error score, was only influenced by reading direction, with more sequential processing in the visual field where reading "begins." The distribution of attention when reading parafoveal words is better explained by the interaction between left hemisphere activation and strategies related to reading direction. We discuss these results in light of an attentional theory that dissociates selection and preparation.
Effect of study context on item recollection.
Skinner, Erin I; Fernandes, Myra A
2010-07-01
We examined how visual context information provided during encoding, and unrelated to the target word, affected later recollection for words presented alone using a remember-know paradigm. Experiments 1A and 1B showed that participants had better overall memory-specifically, recollection-for words studied with pictures of intact faces than for words studied with pictures of scrambled or inverted faces. Experiment 2 replicated these results and showed that recollection was higher for words studied with pictures of faces than when no image accompanied the study word. In Experiment 3 participants showed equivalent memory for words studied with unique faces as for those studied with a repeatedly presented face. Results suggest that recollection benefits when visual context information high in meaningful content accompanies study words and that this benefit is not related to the uniqueness of the context. We suggest that participants use elaborative processes to integrate item and meaningful contexts into ensemble information, improving subsequent item recollection.
The impact of task demand on visual word recognition.
Yang, J; Zevin, J
2014-07-11
The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Modulation of human extrastriate visual processing by selective attention to colours and words.
Nobre, A C; Allison, T; McCarthy, G
1998-07-01
The present study investigated the effect of visual selective attention upon neural processing within functionally specialized regions of the human extrastriate visual cortex. Field potentials were recorded directly from the inferior surface of the temporal lobes in subjects with epilepsy. The experimental task required subjects to focus attention on words from one of two competing texts. Words were presented individually and foveally. Texts were interleaved randomly and were distinguishable on the basis of word colour. Focal field potentials were evoked by words in the posterior part of the fusiform gyrus. Selective attention strongly modulated long-latency potentials evoked by words. The attention effect co-localized with word-related potentials in the posterior fusiform gyrus, and was independent of stimulus colour. The results demonstrated that stimuli receive differential processing within specialized regions of the extrastriate cortex as a function of attention. The late onset of the attention effect and its co-localization with letter string-related potentials but not with colour-related potentials recorded from nearby regions of the fusiform gyrus suggest that the attention effect is due to top-down influences from downstream regions involved in word processing.
Can colours be used to segment words when reading?
Perea, Manuel; Tejero, Pilar; Winskel, Heather
2015-07-01
Rayner, Fischer, and Pollatsek (1998, Vision Research) demonstrated that reading unspaced text in Indo-European languages produces a substantial reading cost in word identification (as deduced from an increased word-frequency effect on target words embedded in the unspaced vs. spaced sentences) and in eye movement guidance (as deduced from landing sites closer to the beginning of the words in unspaced sentences). However, the addition of spaces between words comes with a cost: nearby words may fall outside high-acuity central vision, thus reducing the potential benefits of parafoveal processing. In the present experiment, we introduced a salient visual cue intended to facilitate the process of word segmentation without compromising visual acuity: each alternating word was printed in a different colour. Results revealed only a small reading cost of unspaced alternating-colour sentences relative to the spaced sentences. Thus, the present data demonstrate that colour can be useful for segmenting words for readers of spaced orthographies. Copyright © 2015 Elsevier B.V. All rights reserved.
Don't words come easy? A psychophysical exploration of word superiority
Starrfelt, Randi; Petersen, Anders; Vangkilde, Signe
2013-01-01
Words are made of letters, and yet sometimes it is easier to identify a word than a single letter. This word superiority effect (WSE) has been observed when written stimuli are presented very briefly or degraded by visual noise. We compare performance with letters and words in three experiments, to explore the extents and limits of the WSE. Using a carefully controlled list of three letter words, we show that a WSE can be revealed in vocal reaction times even to undegraded stimuli. With a novel combination of psychophysics and mathematical modeling, we further show that the typical WSE is specifically reflected in perceptual processing speed: single words are simply processed faster than single letters. Intriguingly, when multiple stimuli are presented simultaneously, letters are perceived more easily than words, and this is reflected both in perceptual processing speed and visual short term memory (VSTM) capacity. So, even if single words come easy, there is a limit to the WSE. PMID:24027510
Huettig, Falk; Altmann, Gerry T M
2005-05-01
When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84-107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.
Wu, Helen C.; Nagasawa, Tetsuro; Brown, Erik C.; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2011-01-01
Objective: We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. Methods: We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Results: Both tasks commonly elicited gamma-augmentation (maximally at 80–100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to the reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Conclusions: Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. Significance: The present study increases our understanding of the visual-language pathways. PMID:21498109
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
Klop, D; Engelbrecht, L
2013-12-01
This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arendt, Dustin L.; Volkova, Svitlana
Analyzing and visualizing large amounts of social media communications and contrasting short-term conversation changes over time and geo-locations is extremely important for commercial and government applications. Earlier approaches for large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings – low-dimensional word representations in a continuous vector space where similar words are embedded nearby each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
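The nearest-neighbor lookup this abstract describes can be sketched in a few lines. This is an illustrative toy, not the ESTEEM tool itself: the words and three-dimensional vectors below are invented for demonstration, and real systems use learned embeddings with hundreds of dimensions.

```python
import math

# Hypothetical toy embeddings; similar words sit nearby in the vector space.
embeddings = {
    "protest": [0.9, 0.1, 0.0],
    "rally":   [0.8, 0.2, 0.1],
    "weather": [0.0, 0.9, 0.3],
    "storm":   [0.1, 0.8, 0.4],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest_neighbors(query, k=2):
    """Rank all other vocabulary words by cosine similarity to the query."""
    q = embeddings[query]
    scored = [(w, cosine(q, v)) for w, v in embeddings.items() if w != query]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [w for w, _ in scored[:k]]

print(nearest_neighbors("protest"))  # "rally" ranks closest
```

Tracking how these ranked neighbor lists change across time slices or geo-locations is, in essence, what a spatiotemporal embedding visualization displays.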
ERIC Educational Resources Information Center
Sauval, Karinne; Perre, Laetitia; Casalis, Séverine
2017-01-01
The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…
Manfredi, Mirella; Cohn, Neil; Kutas, Marta
2017-06-01
Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.
Interference Effects on the Recall of Pictures, Printed Words and Spoken Words.
ERIC Educational Resources Information Center
Burton, John K.; Bruning, Roger H.
Thirty college undergraduates participated in a study of the effects of acoustic and visual interference on the recall of word and picture triads in both short-term and long-term memory. The subjects were presented 24 triads of monosyllabic nouns representing all of the possible combinations of presentation types: pictures, printed words, and…
Ease of identifying words degraded by visual noise.
Barber, P; de la Mahotière, C
1982-08-01
A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.
The (lack of) effect of dynamic visual noise on the concreteness effect in short-term memory.
Castellà, Judit; Campoy, Guillermo
2018-05-17
It has been suggested that the concreteness effect in short-term memory (STM) is a consequence of concrete words having more distinctive and richer semantic representations. The generation and storage of visual codes in STM could also play a crucial role in the effect because concrete words are more imaginable than abstract words. If this were the case, the introduction of a visual interference task would be expected to disrupt recall of concrete words. A Dynamic Visual Noise (DVN) display, which has been proven to eliminate the concreteness effect in long-term memory (LTM), was presented during encoding of concrete and abstract words in an STM serial recall task. Results showed a main effect of word type, with more item errors in abstract words, a main effect of DVN, which impaired global performance due to more order errors, but no interaction, suggesting that DVN did not have any impact on the concreteness effect. These findings are discussed in terms of LTM participation through redintegration processes and in terms of the language-based models of verbal STM.
Stimulus-driven changes in the direction of neural priming during visual word recognition.
Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao
2016-01-15
Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant. Copyright © 2015 Elsevier Inc. All rights reserved.
A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF
Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan
2016-01-01
With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
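The "visual words" representation underlying this approach can be sketched briefly. The following is a hedged toy sketch, not the paper's pipeline: real systems extract 128-dimensional SIFT and 64-dimensional SURF descriptors and learn codebooks by k-means clustering, whereas here the 2-D "descriptors" and codebooks are invented, and the integration of the two feature types is shown as simple histogram concatenation.

```python
def quantize(descriptor, codebook):
    """Index of the nearest codebook center (squared Euclidean distance)."""
    dists = [sum((d - c) ** 2 for d, c in zip(descriptor, center))
             for center in codebook]
    return dists.index(min(dists))

def bovw_histogram(descriptors, codebook):
    """Bag-of-visual-words: count how many descriptors fall in each cluster."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[quantize(d, codebook)] += 1
    return hist

# Toy 2-D descriptors standing in for SIFT and SURF outputs of one image.
sift_codebook = [(0.0, 0.0), (1.0, 1.0)]
surf_codebook = [(0.0, 1.0), (1.0, 0.0)]
sift_desc = [(0.1, 0.2), (0.9, 1.1), (1.0, 0.8)]
surf_desc = [(0.1, 0.9), (0.8, 0.1)]

# "Integration" sketched as concatenating the two per-feature histograms.
integrated = (bovw_histogram(sift_desc, sift_codebook)
              + bovw_histogram(surf_desc, surf_codebook))
print(integrated)  # → [1, 2, 1, 1]
```

Retrieval then compares such integrated histograms between a query image and the database, so robustness to scale/rotation (SIFT) and illumination (SURF) both contribute to the match.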
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.
Does a pear growl? Interference from semantic properties of orthographic neighbors.
Pecher, Diane; de Rooij, Jimmy; Zeelenberg, René
2009-07-01
In this study, we investigated whether semantic properties of a word's orthographic neighbors are activated during visual word recognition. In two experiments, words were presented with a property that was not true for the word itself. We manipulated whether the property was true for an orthographic neighbor of the word. Our results showed that rejection of the property was slower and less accurate when the property was true for a neighbor than when the property was not true for a neighbor. These findings indicate that semantic information is activated before orthographic processing is finished. The present results are problematic for the links model (Forster, 2006; Forster & Hector, 2002) that was recently proposed in order to bring form-first models of visual word recognition into line with previously reported findings (Forster & Hector, 2002; Pecher, Zeelenberg, & Wagenmakers, 2005; Rodd, 2004).
ERIC Educational Resources Information Center
Farley, Andrew P.; Ramonda, Kris; Liu, Xun
2012-01-01
According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992, de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…
Rapid modulation of spoken word recognition by visual primes.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2016-02-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
ERIC Educational Resources Information Center
Barca, Laura; Cornelissen, Piers; Simpson, Michael; Urooj, Uzma; Woods, Will; Ellis, Andrew W.
2011-01-01
Right-handed participants respond more quickly and more accurately to written words presented in the right visual field (RVF) than in the left visual field (LVF). Previous attempts to identify the neural basis of the RVF advantage have had limited success. Experiment 1 was a behavioral study of lateralized word naming which established that the…
Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo
2015-04-01
This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.
Unconscious Cross-Modal Priming of Auditory Sound Localization by Visual Words
ERIC Educational Resources Information Center
Ansorge, Ulrich; Khalid, Shah; Laback, Bernhard
2016-01-01
Little is known about the cross-modal integration of unconscious and conscious information. In the current study, we therefore tested whether the spatial meaning of an unconscious visual word, such as "up", influences the perceived location of a subsequently presented auditory target. Although cross-modal integration of unconscious…
Bletzer, Keith V
2015-01-01
Satisfaction surveys are common in the field of health education as a means of helping organizations improve the appropriateness of training materials and the effectiveness of facilitation and presentation. Survey data can be qualitative, and their analysis often becomes specialized. This technical article examines whether qualitative survey results can be visualized by presenting them as a Word Cloud. Qualitative materials in the form of written comments on an agency-specific satisfaction survey were coded and quantified. The resulting quantitative data were used to convert comments into "input terms" to generate Word Clouds, increasing comprehension and accessibility through visualization of the written responses. A three-tier display incorporated a Word Cloud at the top, followed by the corresponding frequency table and a textual summary of the qualitative data represented by the Word Cloud imagery. This mixed format reflects the recognition that people vary in which format is most effective for assimilating new information. The combination of visual representation through Word Clouds, complemented by quantified qualitative materials, is one means of increasing comprehensibility for a range of stakeholders who might not be familiar with numerical tables or statistical analyses.
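The coding-and-quantifying step this abstract describes can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs, not the authors' actual pipeline: the example comments, the code labels, and the mapping from comment to code are all invented here for demonstration.

```python
from collections import Counter

# Hypothetical data: free-text survey comments, each already assigned an
# analyst-defined code. Each code becomes a Word Cloud "input term".
coded_comments = [
    ("clear slides", "materials"),
    ("good pacing", "facilitation"),
    ("useful handouts", "materials"),
]

# Quantify: the frequency of each input term would drive its font size
# in the generated Word Cloud (the top tier of the three-tier display).
frequencies = Counter(code for _, code in coded_comments)

# Frequency table (the middle tier of the three-tier display).
for term, count in frequencies.most_common():
    print(f"{term}\t{count}")
```

The resulting frequency dictionary is the form most Word Cloud generators accept directly as input, so the same counts back both the image and the table tier.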
Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz
2015-04-01
The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.
Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio
2016-01-01
Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging stems primarily from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region, which hosts the putative visual word form area, was originally considered limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297
Language Proficiency Modulates the Recruitment of Non-Classical Language Areas in Bilinguals
Leonard, Matthew K.; Torres, Christina; Travis, Katherine E.; Brown, Timothy T.; Hagler, Donald J.; Dale, Anders M.; Elman, Jeffrey L.; Halgren, Eric
2011-01-01
Bilingualism provides a unique opportunity for understanding the relative roles of proficiency and order of acquisition in determining how the brain represents language. In a previous study, we combined magnetoencephalography (MEG) and magnetic resonance imaging (MRI) to examine the spatiotemporal dynamics of word processing in a group of Spanish-English bilinguals who were more proficient in their native language. We found that from the earliest stages of lexical processing, words in the second language evoke greater activity in bilateral posterior visual regions, while activity to the native language is largely confined to classical left hemisphere fronto-temporal areas. In the present study, we sought to examine whether these effects relate to language proficiency or order of language acquisition by testing Spanish-English bilingual subjects who had become dominant in their second language. Additionally, we wanted to determine whether activity in bilateral visual regions was related to the presentation of written words in our previous study, so we presented subjects with both written and auditory words. We found greater activity for the less proficient native language in bilateral posterior visual regions for both the visual and auditory modalities, which started during the earliest word encoding stages and continued through lexico-semantic processing. In classical left fronto-temporal regions, the two languages evoked similar activity. Therefore, it is the lack of proficiency rather than secondary acquisition order that determines the recruitment of non-classical areas for word processing. PMID:21455315
Wu, Helen C; Nagasawa, Tetsuro; Brown, Erik C; Juhasz, Csaba; Rothermel, Robert; Hoechstetter, Karsten; Shah, Aashit; Mittal, Sandeep; Fuerst, Darren; Sood, Sandeep; Asano, Eishi
2011-10-01
We measured cortical gamma-oscillations in response to visual-language tasks consisting of picture naming and word reading in an effort to better understand human visual-language pathways. We studied six patients with focal epilepsy who underwent extraoperative electrocorticography (ECoG) recording. Patients were asked to overtly name images presented sequentially in the picture naming task and to overtly read written words in the reading task. Both tasks commonly elicited gamma-augmentation (maximally at 80-100 Hz) on ECoG in the occipital, inferior-occipital-temporal and inferior-Rolandic areas, bilaterally. Picture naming, compared to reading task, elicited greater gamma-augmentation in portions of pre-motor areas as well as occipital and inferior-occipital-temporal areas, bilaterally. In contrast, word reading elicited greater gamma-augmentation in portions of bilateral occipital, left occipital-temporal and left superior-posterior-parietal areas. Gamma-attenuation was elicited by both tasks in portions of posterior cingulate and ventral premotor-prefrontal areas bilaterally. The number of letters in a presented word was positively correlated to the degree of gamma-augmentation in the medial occipital areas. Gamma-augmentation measured on ECoG identified cortical areas commonly and differentially involved in picture naming and reading tasks. Longer words may activate the primary visual cortex for the more peripheral field. The present study increases our understanding of the visual-language pathways. Copyright © 2011 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
Metusalem, Ross; Kutas, Marta; Urbach, Thomas P.; Elman, Jeffrey L.
2016-01-01
During incremental language comprehension, the brain activates knowledge of described events, including knowledge elements that constitute semantic anomalies in their linguistic context. The present study investigates hemispheric asymmetries in this process, with the aim of advancing our understanding of the neural basis and functional properties of event knowledge activation during incremental comprehension. In a visual half-field event-related brain potential (ERP) experiment, participants read brief discourses in which the third sentence contained a word that was either highly expected, semantically anomalous but related to the described event, or semantically anomalous but unrelated to the described event. For both visual fields of target word presentation, semantically anomalous words elicited N400 ERP components of greater amplitude than did expected words. Crucially, event-related anomalous words elicited a reduced N400 relative to event-unrelated anomalous words only with left visual field/right hemisphere presentation. This result suggests that right hemisphere processes are critical to the activation of event knowledge elements that violate the linguistic context, and in doing so informs existing theories of hemispheric asymmetries in semantic processing during language comprehension. Additionally, this finding coincides with past research suggesting a crucial role for the right hemisphere in elaborative inference generation, raises interesting questions regarding hemispheric coordination in generating event-specific linguistic expectancies, and more generally highlights the possibility of functional dissociation between event knowledge activation for the generation of elaborative inferences and for linguistic expectancies. PMID:26878980
Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen
2018-07-01
Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.
Beginning Readers Activate Semantics from Sub-Word Orthography
ERIC Educational Resources Information Center
Nation, Kate; Cocksey, Joanne
2009-01-01
Two experiments assessed whether 7-year-old children activate semantic information from sub-word orthography. Children made category decisions to visually-presented words, some of which contained an embedded word (e.g., "hip" in s"hip"). In Experiment 1 children were slower and less accurate to classify words if they contained an embedded word…
Sommers, Mitchell S.; Phelps, Damian
2016-01-01
One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker, compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement, the improvement in speech understanding in going from an auditory-only (A-only) to an auditory-visual (AV) condition. To compare word recognition in A-only and AV modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task. Participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise. List presentation was stopped randomly and participants were then asked to repeat the last 3 words that were presented. Listening effort was assessed using recall performance in the 2-back and 3-back positions. Younger, but not older, adults exhibited reduced listening effort, as indexed by greater recall in the 2- and 3-back positions for the AV compared with the A-only presentations. For younger, but not older, adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception. PMID:27355772
Object activation in semantic memory from visual multimodal feature input.
Kraut, Michael A; Kremen, Sarah; Moo, Lauren R; Segal, Jessica B; Calhoun, Vincent; Hart, John
2002-01-01
The human brain's representation of objects has been proposed to exist as a network of coactivated neural regions present in multiple cognitive systems. However, it is not known if there is a region specific to the process of activating an integrated object representation in semantic memory from multimodal feature stimuli (e.g., picture-word). A previous study using word-word feature pairs as stimulus input showed that the left thalamus is integrally involved in object activation (Kraut, Kremen, Segal, et al., this issue). In the present study, participants were presented picture-word pairs that are features of objects, with the task being to decide if together they "activated" an object not explicitly presented (e.g., picture of a candle and the word "icing" activate the internal representation of a "cake"). For picture-word pairs that combine to elicit an object, signal change was detected in the ventral temporo-occipital regions, pre-SMA, left primary somatomotor cortex, both caudate nuclei, and the dorsal thalami bilaterally. These findings suggest that the left thalamus is engaged for either picture or word stimuli, but the right thalamus appears to be involved when picture stimuli are also presented with words in semantic object activation tasks. The somatomotor signal changes are likely secondary to activation of the semantic object representations from multimodal visual stimuli.
Using complex auditory-visual samples to produce emergent relations in children with autism.
Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P
2010-03-01
Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.
The effects of bilateral presentations on lateralized lexical decision.
Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran
2007-06-01
We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor ("lexical redundancy effect") is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by "perceptual" distractors consisting of sequences of the letter "x" (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.
Aural, visual, and pictorial stimulus formats in false recall.
Beauchamp, Heather M
2002-12-01
The present investigation is an initial simultaneous examination of the influence of three stimulus formats on false memories. Several pilot tests were conducted to develop new category associate stimulus lists. 73 women and 26 men (M age=21.1 yr.) were in one of three conditions: they either heard words, were shown words, or were shown pictures highly related to critical nonpresented items. As expected, recall of critical nonpresented stimuli was significantly greater for aural lists than for visually presented words and pictorial images. These findings demonstrate that the accuracy of memory is influenced by the format of the information encoded.
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M
2009-04-01
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.
Effect of word familiarity on visually evoked magnetic fields.
Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M
2004-11-30
This study investigated the effect of the word familiarity of visual stimuli on the word-recognizing function of the human brain. Word familiarity is an index of the relative ease of word perception and is characterized by facilitation and accuracy in word recognition. We studied the effect of word familiarity, using "Hiragana" (phonetic characters in Japanese orthography) characters as visual stimuli, on the elicitation of visually evoked magnetic fields in a word-naming task. The words were selected from a database of lexical properties of Japanese. The four "Hiragana" characters used were grouped and presented in four classes by degree of familiarity. Three components were observed in the averaged waveforms of the root mean square (RMS) value, at latencies of about 100 ms, 150 ms, and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation (F(3, 36) = 5.501, p = 0.035) with the value of familiarity. ECDs of the 220 ms component were observed in the intraparietal sulcus (IPS). Increments in the RMS value of the 220 ms component, which might reflect ideographic word recognition (retrieving the word "as a whole"), were enhanced with increasing familiarity. The interaction of characters, which increased with familiarity, might make the word function "as a large symbol" and enhance a "pop-out" effect, with the escaping character inhibiting other characters and enhancing the segmentation of the character (as a figure) from the ground.
Does Emotion Help or Hinder Immediate Memory?: Arousal Versus Priority-Binding Mechanisms
ERIC Educational Resources Information Center
Hadley, Christopher B.; MacKay, Donald G.
2006-01-01
People recall taboo words better than neutral words in many experimental contexts. The present rapid serial visual presentation (RSVP) experiments demonstrated this taboo-superiority effect for immediate recall of mixed lists containing taboo and neutral words matched for familiarity, length, and category coherence. Under binding theory (MacKay et…
The Internal Structure of "Chaos": Letter Category Determines Visual Word Perceptual Units
ERIC Educational Resources Information Center
Chetail, Fabienne; Content, Alain
2012-01-01
The processes and the cues determining the orthographic structure of polysyllabic words remain far from clear. In the present study, we investigated the role of letter category (consonant vs. vowels) in the perceptual organization of letter strings. In the syllabic counting task, participants were presented with written words matched for the…
Hemispheric Differences in Bilingual Word and Language Recognition.
ERIC Educational Resources Information Center
Roberts, William T.; And Others
The linguistic role of the right hemisphere in bilingual language processing was examined. Ten right-handed Spanish-English bilinguals were tachistoscopically presented with mixed lists of Spanish and English words to either the right or left visual field and asked to identify the language and the word presented. Five of the subjects identified…
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures.
Effects of Multimodal Information on Learning Performance and Judgment of Learning
ERIC Educational Resources Information Center
Chen, Gongxiang; Fu, Xiaolan
2003-01-01
Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…
ERIC Educational Resources Information Center
Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.
2007-01-01
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant and for words starting with a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
Auditory emotional cues enhance visual perception.
Zeelenberg, René; Bocanegra, Bruno R
2010-04-01
Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by emotional cues as compared to neutral cues. When the cue was presented visually we replicated the emotion-induced impairment found in other studies. Our results suggest emotional stimuli have a twofold effect on perception. They impair perception by reflexively attracting attention at the expense of competing stimuli. However, emotional stimuli also induce a nonspecific perceptual enhancement that carries over onto other stimuli when competition is reduced, for example, by presenting stimuli in different modalities. Copyright 2009 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang
2009-01-01
A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…
Cao, Hongwen; Gao, Min; Yan, Hongmei
2016-01-01
The attentional blink (AB) is the phenomenon in which the identification of the second of two targets (T2) is attenuated if it is presented less than 500 ms after the first target (T1). Although the AB is eliminated in canonical word conditions, it remains unclear whether the character order in compound words affects the magnitude of the AB. Morpheme decomposition and transposition of Chinese two-character compound words can provide an effective means to examine AB priming and to assess combinations of the component representations inherent to visual word identification. In the present study, we examined the processing of consecutive targets in a rapid serial visual presentation (RSVP) paradigm using Chinese two-character compound words in which the two characters were transposed to form meaningful words or meaningless combinations (reversible, transposed, or canonical words). We found that when two Chinese characters that form a compound word, regardless of their order, are presented in an RSVP sequence, the likelihood of an AB for the second character is greatly reduced or eliminated compared to when the two characters constitute separate words rather than a compound word. Moreover, the order of the report for the two characters is more likely to be reversed when the normal order of the two characters in a compound word is reversed, especially when the interval between the presentation of the two characters is extremely short. These findings are more consistent with the cognitive strategy hypothesis than the resource-limited hypothesis during character decomposition and transposition of Chinese two-character compound words. These results suggest that compound characters are perceived as a unit, rather than two separate words. The data further suggest that readers could easily understand the text with character transpositions in compound words during Chinese reading.
Brébion, G; Ohlsen, R I; Bressan, R A; David, A S
2012-12-01
Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. Here we bring together findings from a broad memory investigation to better specify the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for the temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations, whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.
Zator, Krysten; Katz, Albert N
2017-07-01
Here, we examined linguistic differences in the reports of memories produced by three cueing methods. Two groups of young adults were cued visually either by words representing events or popular cultural phenomena that took place when they were 5, 10, or 16 years of age, or by words referencing a general lifetime period word cue directing them to that period in their life. A third group heard 30-second long musical clips of songs popular during the same three time periods. In each condition, participants typed a specific event memory evoked by the cue and these typed memories were subjected to analysis by the Linguistic Inquiry and Word Count (LIWC) program. Differences in the reports produced indicated that listening to music evoked memories embodied in motor-perceptual systems more so than memories evoked by our word-cueing conditions. Additionally, relative to music cues, lifetime period word cues produced memories with reliably more uses of personal pronouns, past tense terms, and negative emotions. The findings provide evidence for the embodiment of autobiographical memories, and how those differ when the cues emphasise different aspects of the encoded events.
Visual half-field presentations of incongruent color words: effects of gender and handedness.
Franzon, M; Hugdahl, K
1986-09-01
Right-handed (dextral) and left-handed (sinistral) males and females (N = 15) were compared for language lateralization in a visual half-field (VHF) incongruent color-words paradigm. The paradigm consists of repeated brief (less than 200 msec) presentations of color-words written in an incongruent color. Presentations are either to the right or to the left of center fixation. The task of the subject is to report the color the word is written in on each trial, ignoring the color-word. Color-bars and congruent color-words were used as control stimuli. Vocal reaction time (VRT) and error frequency were used as dependent measures. The logic behind the paradigm is that incongruent color-words should lead to a greater cognitive conflict when presented in the half-field contralateral to the dominant hemisphere. The results showed significantly longer VRTs in the right half-field for the dextral subjects. Furthermore, significantly more errors were observed in the male dextral group when the incongruent stimuli were presented in the right half-field. There was a similar trend in the data for the sinistral males. No differences between half-fields were observed for the female groups. It is concluded that the present results strengthen previous findings from our laboratory (Hugdahl and Franzon, 1985) that the incongruent color-words paradigm is a useful non-invasive technique for the study of lateralization in the intact brain.
Ludersdorfer, Philipp; Kronbichler, Martin; Wimmer, Heinz
2015-01-01
The present fMRI study used a spelling task to investigate the hypothesis that the left ventral occipitotemporal cortex (vOT) hosts neuronal representations of whole written words. Such an orthographic word lexicon is posited by cognitive dual-route theories of reading and spelling. In the scanner, participants performed a spelling task in which they had to indicate if a visually presented letter is present in the written form of an auditorily presented word. The main experimental manipulation distinguished between an orthographic word spelling condition in which correct spelling decisions had to be based on orthographic whole-word representations, a word spelling condition in which reliance on orthographic whole-word representations was optional and a phonological pseudoword spelling condition in which no reliance on such representations was possible. To evaluate spelling-specific activations the spelling conditions were contrasted with control conditions that also presented auditory words and pseudowords, but participants had to indicate if a visually presented letter corresponded to the gender of the speaker. We identified a left vOT cluster activated for the critical orthographic word spelling condition relative to both the control condition and the phonological pseudoword spelling condition. Our results suggest that activation of left vOT during spelling can be attributed to the retrieval of orthographic whole-word representations and, thus, support the position that the left vOT potentially represents the neuronal equivalent of the cognitive orthographic word lexicon. Hum Brain Mapp, 36:1393–1406, 2015. © 2014 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc. PMID:25504890
Different patterns of modality dominance across development.
Barnhart, Wesley R; Rivera, Samuel; Robinson, Christopher W
2018-01-01
The present study sought to better understand how children, young adults, and older adults attend and respond to multisensory information. In Experiment 1, young adults were presented with two spoken words, two pictures, or two word-picture pairings and they had to determine if the two stimuli/pairings were exactly the same or different. Pairing the words and pictures together slowed down visual but not auditory response times and delayed the latency of first fixations, both of which are consistent with a proposed mechanism underlying auditory dominance. Experiment 2 examined the development of modality dominance in children, young adults, and older adults. Cross-modal presentation attenuated visual accuracy and slowed down visual response times in children, whereas older adults showed the opposite pattern, with cross-modal presentation attenuating auditory accuracy and slowing down auditory response times. Cross-modal presentation also delayed first fixations in children and young adults. Mechanisms underlying modality dominance and multisensory processing are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Kelly, R R; Tomlinson-Keasey, C
1976-12-01
Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years, 11 months) were visually presented familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.
Optimal viewing position in vertically and horizontally presented Japanese words.
Kajii, N; Osaka, N
2000-11-01
In the present study, the optimal viewing position (OVP) phenomenon in Japanese Hiragana was investigated, with special reference to a comparison between the vertical and the horizontal meridians in the visual field. In the first experiment, word recognition scores were determined while the eyes were fixating predetermined locations in vertically and horizontally displayed words. Similar to what has been reported for Roman scripts, OVP curves, which were asymmetric with respect to the beginning of words, were observed in both conditions. However, this asymmetry was less pronounced for vertically than for horizontally displayed words. In the second experiment, the visibility of individual characters within strings was examined for the vertical and horizontal meridians. As for Roman characters, letter identification scores were better in the right than in the left visual field. However, identification scores did not differ between the upper and the lower sides of fixation along the vertical meridian. The results showed that the model proposed by Nazir, O'Regan, and Jacobs (1991) cannot entirely account for the OVP phenomenon. A model in which visual and lexical factors are combined is proposed instead.
Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.
2016-01-01
Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021
Semantically induced distortions of visual awareness in a patient with Balint's syndrome.
Soto, David; Humphreys, Glyn W
2009-02-01
We present data indicating that visual awareness for a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink-related) detection responses in a patient with simultanagnosia due to bilateral parietal lesions. We found that colour detection was influenced by the congruency between the meaning of the word and the relevant ink colour, with impaired performance when the word and the colour mismatched (on incongruent trials). This result held even when remote associations between meaning and colour were used (i.e. the word "PEA" influenced detection of the ink colour red). The results are consistent with a late locus of conscious visual experience that is derived at post-semantic levels. The implications for the understanding of the role of parietal cortex in object binding and visual awareness are discussed.
Visual Presentation Effects on Identification of Multiple Environmental Sounds
Masakura, Yuko; Ichikawa, Makoto; Shimono, Koichi; Nakatsuka, Reio
2016-01-01
This study examined how the contents and timing of a visual stimulus affect the identification of mixed sounds recorded in a daily life environment. For experiments, we presented four environment sounds as auditory stimuli for 5 s along with a picture or a written word as a visual stimulus that might or might not denote the source of one of the four sounds. Three conditions of temporal relations between the visual stimuli and sounds were used. The visual stimulus was presented either: (a) for 5 s simultaneously with the sound; (b) for 5 s, 1 s before the sound (SOA between the audio and visual stimuli was 6 s); or (c) for 33 ms, 1 s before the sound (SOA was 1033 ms). Participants reported all identifiable sounds for those audio–visual stimuli. To characterize the effects of visual stimuli on sound identification, the following were used: the identification rates of sounds for which the visual stimulus denoted its sound source, the rates of other sounds for which the visual stimulus did not denote the sound source, and the frequency of false hearing of a sound that was not presented for each sound set. Results of the four experiments demonstrated that a picture or a written word promoted identification of the sound when it was related to the sound, particularly when the visual stimulus was presented for 5 s simultaneously with the sounds. However, a visual stimulus preceding the sounds had a benefit only for the picture, not for the written word. Furthermore, presentation with a picture denoting a sound simultaneously with the sound reduced the frequency of false hearing. These results suggest three ways that presenting a visual stimulus affects identification of the auditory stimulus. First, activation of the visual representation extracted directly from the picture promotes identification of the denoted sound and suppresses the processing of sounds for which the visual stimulus did not denote the sound source. 
Second, effects based on processing of the conceptual information promote identification of the denoted sound and suppress the processing of sounds for which the visual stimulus did not denote the sound source. Third, processing of the concurrent visual representation suppresses false hearing. PMID:26973478
Interhemispheric interaction in the split-brain.
Lambert, A J
1991-01-01
An experiment is reported in which a split-brain patient (LB) was simultaneously presented with two words, one to the left and one to the right of fixation. He was instructed to categorize the right sided word (living vs non-living), and to ignore anything appearing to the left of fixation. LB's performance on this task closely resembled that of normal neurologically intact individuals. Manual response speed was slower when the unattended (left visual field) word belonged to the same category as the right visual field word. Implications of this finding for views of the split-brain syndrome are discussed.
ERIC Educational Resources Information Center
von Feldt, James R.; Subtelny, Joanne
The Webster diacritical system provides a discrete symbol for each sound and designates the appropriate syllable to be stressed in any polysyllabic word; the symbol system presents cues for correct production, auditory discrimination, and visual recognition of new words in print and as visual speech gestures. The Webster's Diacritical CAI Program…
Individual differences in solving arithmetic word problems
2013-01-01
Background With the present functional magnetic resonance imaging (fMRI) study at 3 T, we investigated the neural correlates of visualization and verbalization during arithmetic word problem solving. In the domain of arithmetic, visualization might mean to visualize numbers and (intermediate) results while calculating, and verbalization might mean that numbers and (intermediate) results are verbally repeated during calculation. If the brain areas involved in number processing are domain-specific as assumed, that is, if the left angular gyrus (AG) shows an affinity to the verbal domain and the left and right intraparietal sulcus (IPS) shows an affinity to the visual domain, then the activation of these areas should show a dependency on an individual's cognitive style. Methods 36 healthy young adults participated in the fMRI study. The participants' habitual use of visualization and verbalization when solving arithmetic word problems was assessed with a short self-report assessment. During the fMRI measurement, arithmetic word problems that had to be solved by the participants were presented in an event-related design. Results We found that visualizers showed greater brain activation in brain areas involved in visual processing, and that verbalizers showed greater brain activation within the left angular gyrus. Conclusions Our results indicate that cognitive styles or preferences play an important role in understanding brain activation. Our results confirm that strong visualizers use mental imagery more strongly than weak visualizers during calculation. Moreover, our results suggest that the left AG shows a specific affinity to the verbal domain and subserves number processing in a modality-specific way. PMID:23883107
Behavioral and Neural Representations of Spatial Directions across Words, Schemas, and Images.
Weisberg, Steven M; Marchette, Steven A; Chatterjee, Anjan
2018-05-23
Modern spatial navigation requires fluency with multiple representational formats, including visual scenes, signs, and words. These formats convey different information. Visual scenes are rich and specific but contain extraneous details. Arrows, as an example of signs, are schematic representations in which the extraneous details are eliminated, but analog spatial properties are preserved. Words eliminate all spatial information and convey spatial directions in a purely abstract form. How does the human brain compute spatial directions within and across these formats? To investigate this question, we conducted two experiments on men and women: a behavioral study that was preregistered and a neuroimaging study using multivoxel pattern analysis of fMRI data to uncover similarities and differences among representational formats. Participants in the behavioral study viewed spatial directions presented as images, schemas, or words (e.g., "left"), and responded to each trial, indicating whether the spatial direction was the same or different as the one viewed previously. They responded more quickly to schemas and words than images, despite the visual complexity of stimuli being matched. Participants in the fMRI study performed the same task but responded only to occasional catch trials. Spatial directions in images were decodable in the intraparietal sulcus bilaterally but were not in schemas and words. Spatial directions were also decodable between all three formats. These results suggest that intraparietal sulcus plays a role in calculating spatial directions in visual scenes, but this neural circuitry may be bypassed when the spatial directions are presented as schemas or words. SIGNIFICANCE STATEMENT Human navigators encounter spatial directions in various formats: words ("turn left"), schematic signs (an arrow showing a left turn), and visual scenes (a road turning left). The brain must transform these spatial directions into a plan for action. 
Here, we investigate similarities and differences between neural representations of these formats. We found that bilateral intraparietal sulci represent spatial directions in visual scenes and across the three formats. We also found that participants respond quickest to schemas, then words, then images, suggesting that spatial directions in abstract formats are easier to interpret than concrete formats. These results support a model of spatial direction interpretation in which spatial directions are either computed for real world action or computed for efficient visual comparison. Copyright © 2018 the authors 0270-6474/18/384996-12$15.00/0.
Reading speed benefits from increased vertical word spacing in normal peripheral vision.
Chung, Susana T L
2004-07-01
Crowding, the adverse spatial interaction due to proximity of adjacent targets, has been suggested as an explanation for slow reading in peripheral vision. The purposes of this study were to (1) demonstrate that crowding exists at the word level and (2) examine whether or not reading speed in central and peripheral vision can be enhanced with increased vertical word spacing. Five normal observers read aloud sequences of six unrelated four-letter words presented on a computer monitor, one word at a time, using rapid serial visual presentation (RSVP). Reading speeds were calculated based on the RSVP exposure durations yielding 80% correct. Testing was conducted at the fovea and at 5 degrees and 10 degrees in the inferior visual field. Critical print size (CPS) for each observer and at each eccentricity was first determined by measuring reading speeds for four print sizes using unflanked words. We then presented words at 0.8x or 1.4x CPS, with each target word flanked by two other words, one above and one below the target word. Reading speeds were determined for vertical word spacings (baseline-to-baseline separation between two vertically separated words) ranging from 0.8x to 2x the standard single-spacing, as well as the unflanked condition. At the fovea, reading speed increased with vertical word spacing up to about 1.2x to 1.5x the standard spacing and remained constant and similar to the unflanked reading speed at larger vertical word spacings. In the periphery, reading speed also increased with vertical word spacing, but it remained below the unflanked reading speed for all spacings tested. At 2x the standard spacing, peripheral reading speed was still about 25% lower than the unflanked reading speed for both eccentricities and print sizes. Results from a control experiment showed that the greater reliance of peripheral reading speed on vertical word spacing was also found in the right visual field. 
Increased vertical word spacing, which presumably decreases the adverse effect of crowding between adjacent lines of text, benefits reading speed. This benefit is greater in peripheral than central vision.
Morphological Structures in Visual Word Recognition: The Case of Arabic
ERIC Educational Resources Information Center
Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)
2004-01-01
This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…
Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.
Strother, Lars; Coros, Alexandra M; Vilis, Tutis
2016-02-01
Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex (especially those that evolved to support the visual processing of faces) are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.
Ruthmann, Katja; Schacht, Annekathrin
2017-01-01
Emotional stimuli attract attention and lead to increased activity in the visual cortex. The present study investigated the impact of personal relevance on emotion processing by presenting emotional words within sentences that referred to participants’ significant others or to unknown agents. In event-related potentials, personal relevance increased visual cortex activity within 100 ms after stimulus onset and the amplitudes of the Late Positive Complex (LPC). Moreover, personally relevant contexts gave rise to augmented pupillary responses and higher arousal ratings, suggesting a general boost of attention and arousal. Finally, personal relevance increased emotion-related ERP effects starting around 200 ms after word onset; effects for negative words compared to neutral words were prolonged in duration. Source localizations of these interactions revealed activations in prefrontal regions, in the visual cortex and in the fusiform gyrus. Taken together, these results demonstrate the high impact of personal relevance on reading in general and on emotion processing in particular. PMID:28541505
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual-words (BoW) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
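As an illustration only (not the authors' implementation), the co-occurrence step at the heart of descriptive visual phrases can be sketched as follows: given the visual-word IDs detected in each image, count how often unordered pairs of visual words appear together and keep the most frequent pairs as phrase candidates. The function name and toy IDs are hypothetical; the real framework additionally checks spatial proximity of the word pair and its descriptiveness for specific object or scene categories.

```python
from collections import Counter
from itertools import combinations

def descriptive_visual_phrases(image_words, top_k=2):
    """Return the top_k most frequently co-occurring visual-word pairs.

    image_words: list of per-image collections of visual-word IDs.
    A simplified sketch of DVP candidate mining: only pair frequency
    is considered, not spatial proximity or category descriptiveness.
    """
    pair_counts = Counter()
    for words in image_words:
        # unordered unique pairs of visual words present in one image
        for pair in combinations(sorted(set(words)), 2):
            pair_counts[pair] += 1
    return [pair for pair, _ in pair_counts.most_common(top_k)]

images = [
    [1, 2, 3],  # visual-word IDs detected in each toy image
    [1, 2, 5],
    [1, 2, 4],
    [3, 4, 5],
]
print(descriptive_visual_phrases(images, top_k=1))  # (1, 2) co-occurs in 3 images
```

In a full pipeline the word IDs would come from quantizing local descriptors (e.g., by k-means) against a visual vocabulary; the frequent pairs surviving the extra filters become the DVPs used alongside DVWs for retrieval and re-ranking.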
Mark My Words: Tone of Voice Changes Affective Word Representations in Memory
Schirmer, Annett
2010-01-01
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154
Amsel, Ben D
2011-04-01
Empirically derived semantic feature norms categorized into different types of knowledge (e.g., visual, functional, auditory) can be summed to create number-of-feature counts per knowledge type. Initial evidence suggests several such knowledge types may be recruited during language comprehension. The present study provides a more detailed understanding of the time course and intensity of influence of several such knowledge types on real-time neural activity. A linear mixed-effects model was applied to single-trial event-related potentials for 207 visually presented concrete words measured on total number of features (semantic richness), imageability, and number of visual motion, color, visual form, smell, taste, sound, and function features. Significant influences of multiple feature types occurred before 200 ms, suggesting parallel neural computation of word form and conceptual knowledge during language comprehension. Function and visual motion features most prominently influenced neural activity, underscoring the importance of action-related knowledge in computing word meaning. The dynamic time courses and topographies of these effects are most consistent with a flexible conceptual system wherein temporally dynamic recruitment of representations in modal and supramodal cortex are a crucial element of the constellation of processes constituting word meaning computation in the brain. Copyright © 2011 Elsevier Ltd. All rights reserved.
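The regression logic behind such single-trial analyses can be sketched with simulated data (an assumption-laden toy, not the study's analysis): each trial's ERP amplitude is modeled as a linear function of per-word feature counts, and the slopes estimate each knowledge type's influence. This sketch uses ordinary least squares; the study itself fit a linear mixed-effects model with additional predictors and random effects, which plain OLS does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 500 trials: amplitude depends on two hypothetical feature
# counts (e.g., visual-motion and function features per word) plus noise.
n_trials = 500
motion = rng.integers(0, 6, n_trials)
function = rng.integers(0, 6, n_trials)
amplitude = 1.0 - 0.4 * motion + 0.6 * function + rng.normal(0, 0.5, n_trials)

# Design matrix: intercept + the two feature-count predictors.
X = np.column_stack([np.ones(n_trials), motion, function])
coef, *_ = np.linalg.lstsq(X, amplitude, rcond=None)
print(coef)  # slope estimates recover roughly [1.0, -0.4, 0.6]
```

A mixed-effects version would add random intercepts (and possibly slopes) per subject and per word, which is what lets the model pool across trials without inflating false positives.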
Schröter, Pauline; Schroeder, Sascha
2017-12-01
With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.
Moradi, Shahram; Lidestam, Björn; Rönnberg, Jerker
2016-06-17
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. © The Author(s) 2016.
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Muñoz-Samons, Daniel; Ochoa, Susana; Sánchez-Laforga, Ana María; Brébion, Gildas
2017-01-01
Background Visual mental imagery might be critical in the ability to discriminate imagined from perceived pictures. Our aim was to investigate the neural bases of this specific type of reality-monitoring process in individuals with high visual imagery abilities. Methods A reality-monitoring task was administered to twenty-six healthy participants using functional magnetic resonance imaging. During the encoding phase, 45 words designating common items, and 45 pictures of other common items, were presented in random order. During the recall phase, participants were required to remember whether a picture of the item had been presented, or only a word. Two subgroups of participants with a propensity for high vs. low visual imagery were contrasted. Results Activation of the amygdala, left inferior occipital gyrus, insula, and precuneus were observed when high visual imagers encoded words later remembered as pictures. At the recall phase, these same participants activated the middle frontal gyrus and inferior and superior parietal lobes when erroneously remembering pictures. Conclusions The formation of visual mental images might activate visual brain areas as well as structures involved in emotional processing. High visual imagers demonstrate increased activation of a fronto-parietal source-monitoring network that enables distinction between imagined and perceived pictures. PMID:28046076
Encoding context and false recognition memories.
Bruce, Darryl; Phillips-Grant, Kimberly; Conrad, Nicole; Bona, Susan
2004-09-01
False recognition of an extralist word that is thematically related to all words of a study list may reflect internal activation of the theme word during encoding followed by impaired source monitoring at retrieval, that is, difficulty in determining whether the word had actually been experienced or merely thought of. To assist source monitoring, distinctive visual or verbal contexts were added to study words at input. Both types of context produced similar effects: False alarms to theme-word (critical) lures were reduced; remember judgements of critical lures called old were lower; and if contextual information had been added to lists, subjects indicated as much for list items and associated critical foils identified as old. The visual and verbal contexts used in the present studies were held to disrupt semantic categorisation of list words at input and to facilitate source monitoring at output.
Modality dependency of familiarity ratings of Japanese words.
Amano, S; Kondo, T; Kakehi, K
1995-07-01
Familiarity ratings for a large number of aurally and visually presented Japanese words were measured for 11 subjects, in order to investigate the modality dependency of familiarity. The correlation coefficient between auditory and visual ratings was .808, which is lower than that observed for English words, suggesting that a substantial portion of the mental lexicon is modality dependent. It was shown that the modality dependency is greater for low-familiarity words than for medium- or high-familiarity words. This difference between the low- and the medium- or high-familiarity words has a relationship to orthography. That is, the dependency is larger in words consisting only of kanji, which may have multiple pronunciations and usually represent meaning, than in words consisting only of hiragana or katakana, which have a single pronunciation and usually do not represent meaning. These results indicate that the idiosyncratic characteristics of Japanese orthography contribute to the modality dependency.
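The modality-dependency claim above rests on a Pearson product-moment correlation between auditory and visual familiarity ratings for the same words. A minimal sketch of that computation, using invented 7-point rating values rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation between two rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical familiarity ratings for six words heard (auditory)
# and read (visual); toy values, not the study's data.
auditory = [6.2, 5.8, 2.1, 4.4, 6.9, 1.5]
visual = [6.0, 5.1, 3.0, 4.9, 6.5, 2.8]
r = pearson_r(auditory, visual)
```

A value of r well below 1 for matched word lists, as in the abstract's .808, is what motivates the inference that familiarity is partly modality dependent.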
The effect of visual and verbal modes of presentation on children's retention of images and words
NASA Astrophysics Data System (ADS)
Vasu, Ellen Storey; Howe, Ann C.
This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.
Singh, Niharika; Mishra, Ramesh Kumar
2015-01-01
Using a variant of the visual world eye tracking paradigm, we examined if language non-selective activation of translation equivalents leads to attention capture and distraction in a visual task in bilinguals. High and low proficient Hindi-English speaking bilinguals were instructed to programme a saccade towards a line drawing which changed colour among other distractor objects. A spoken word, irrelevant to the main task, was presented before the colour change. On critical trials, one of the line drawings was a phonologically related word of the translation equivalent of the spoken word. Results showed that saccade latency was significantly higher towards the target in the presence of this cross-linguistic translation competitor compared to when the display contained completely unrelated objects. Participants were also slower when the display contained the referent of the spoken word among the distractors. However, the bilingual groups did not differ with regard to the interference effect observed. These findings suggest that spoken words activate their translation equivalents, which bias attention and lead to interference in goal-directed action in the visual domain. PMID:25775184
Effects of Orthographic and Phonological Word Length on Memory for Lists Shown at RSVP and STM Rates
ERIC Educational Resources Information Center
Coltheart, Veronika; Mondy, Stephen; Dux, Paul E.; Stephenson, Lisa
2004-01-01
This article reports 3 experiments in which effects of orthographic and phonological word length on memory were examined for short lists shown at rapid serial visual presentation (RSVP) and short-term memory (STM) rates. Only visual-orthographic length reduced RSVP serial recall, whereas both orthographic and phonological length lowered recall for…
Semantic priming from crowded words.
Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick
2012-06-01
Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.
Implicit integration in a case of integrative visual agnosia.
Aviezer, Hillel; Landau, Ayelet N; Robertson, Lynn C; Peterson, Mary A; Soroker, Nachum; Sacher, Yaron; Bonneh, Yoram; Bentin, Shlomo
2007-05-15
We present a case (SE) with integrative visual agnosia following ischemic stroke affecting the right dorsal and the left ventral pathways of the visual system. Despite his inability to identify global hierarchical letters [Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383], and his dense object agnosia, SE showed normal global-to-local interference when responding to local letters in Navon hierarchical stimuli and significant picture-word identity priming in a semantic decision task for words. Since priming was absent if these features were scrambled, it stands to reason that these effects were not due to priming by distinctive features. The contrast between priming effects induced by coherent and scrambled stimuli is consistent with implicit but not explicit integration of features into a unified whole. We went on to show that possible/impossible object decisions were facilitated by words in a word-picture priming task, suggesting that prompts could activate perceptually integrated images in a backward fashion. We conclude that the absence of SE's ability to identify visual objects except through tedious serial construction reflects a deficit in accessing an integrated visual representation through bottom-up visual processing alone. However, top-down generated images can help activate these visual representations through semantic links.
ERIC Educational Resources Information Center
Malins, Jeffrey G.; Joanisse, Marc F.
2012-01-01
We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…
School-aged children can benefit from audiovisual semantic congruency during memory encoding.
Heikkilä, Jenni; Tiippana, Kaisa
2016-05-01
Although we live in a multisensory world, children's memory has been usually studied concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.
Learning of grammar-like visual sequences by adults with and without language-learning disabilities.
Aguilar, Jessica M; Plante, Elena
2014-08-01
Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes this population. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a 2nd sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.
Choi, Jong Moon; Cho, Yang Seok; Proctor, Robert W
2009-09-01
A Stroop task with separate color bar and color word stimuli was combined with an inhibition-of-return procedure to examine whether visual attention modulates color word processing. In Experiment 1, the color bar was presented at the cued location and the color word at the uncued location, or vice versa, with a 100- or 1,050-msec stimulus onset asynchrony (SOA) between cue and Stroop stimuli. In Experiment 2, on Stroop trials, the color bar was presented at a central fixated location and the color word at a cued or uncued location above or below the color bar. In both experiments, with a 100-msec SOA, the Stroop effect was numerically larger when the color word was displayed at the cued location than when it was displayed at the uncued location, but with the 1,050-msec SOA, this relation between Stroop effect magnitude and location was reversed. These results provide evidence that processing of the color word in the Stroop task is modulated by the location to which visual attention is directed.
[Representation of letter position in visual word recognition process].
Makioka, S
1994-08-01
Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, an effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.
Orthographic Processing in Visual Word Identification.
ERIC Educational Resources Information Center
Humphreys, Glyn W.; And Others
1990-01-01
A series of 6 experiments involving 210 subjects from a college subject pool examined orthographic priming effects between briefly presented pairs of letter strings. A theory of orthographic priming is presented, and the implications of the findings for understanding word recognition and reading are discussed. (SLD)
Object-based attentional selection modulates anticipatory alpha oscillations
Knakker, Balázs; Weiss, Béla; Vidnyánszky, Zoltán
2015-01-01
Visual cortical alpha oscillations are involved in attentional gating of incoming visual information. It has been shown that spatial and feature-based attentional selection result in increased alpha oscillations over the cortical regions representing sensory input originating from the unattended visual field and task-irrelevant visual features, respectively. However, whether attentional gating in the case of object-based selection is also associated with alpha oscillations has not been investigated before. Here we measured anticipatory electroencephalography (EEG) alpha oscillations while participants were cued to attend to foveal face or word stimuli, the processing of which is known to have right and left hemispheric lateralization, respectively. The results revealed that in the case of simultaneously displayed, overlapping face and word stimuli, attending to the words led to increased power of parieto-occipital alpha oscillations over the right hemisphere as compared to when faces were attended. This object category-specific modulation of the hemispheric lateralization of anticipatory alpha oscillations was maintained during sustained attentional selection of sequentially presented face and word stimuli. These results imply that in the case of object-based attentional selection, as with spatial and feature-based attention, gating of visual information processing might involve visual cortical alpha oscillations. PMID:25628554
Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.
Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing
2018-03-28
The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills measured by grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect/discriminate relative locations of features) to Chinese character-processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide for the first time empirical evidence that the finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.
Is the masked priming same-different task a pure measure of prelexical processing?
Kelly, Andrew N; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy
2013-01-01
To study prelexical processes involved in visual word recognition a task is needed that only operates at the level of abstract letter identities. The masked priming same-different task has been purported to do this, as the same pattern of priming is shown for words and nonwords. However, studies using this task have consistently found a processing advantage for words over nonwords, indicating a lexicality effect. We investigated the locus of this word advantage. Experiment 1 used conventional visually presented reference stimuli to test previous accounts of the lexicality effect. Results rule out the use of different strategies, or strength of representations, for words and nonwords. No interaction was shown between prime type and word type, but a consistent word advantage was found. Experiment 2 used novel auditorily presented reference stimuli to restrict nonword matching to the sublexical level. This abolished scrambled priming for nonwords, but not words. Overall this suggests the processing advantage for words over nonwords results from activation of whole-word, lexical representations. Furthermore, the number of shared open-bigrams between primes and targets could account for scrambled priming effects. These results have important implications for models of orthographic processing and studies that have used this task to investigate prelexical processes.
Category Membership and Semantic Coding in the Cerebral Hemispheres.
Turner, Casey E; Kellogg, Ronald T
2016-01-01
Although a gradient of category membership seems to form the internal structure of semantic categories, it is unclear whether the 2 hemispheres of the brain differ in terms of this gradient. The 2 experiments reported here examined this empirical question and explored alternative theoretical interpretations. Participants viewed category names centrally and determined whether a closely related or distantly related word presented to either the left visual field/right hemisphere (LVF/RH) or the right visual field/left hemisphere (RVF/LH) was a member of the category. Distantly related words were categorized more slowly in the LVF/RH relative to the RVF/LH, with no difference for words close to the prototype. The finding resolved past mixed results showing an unambiguous typicality effect for both visual field presentations. Furthermore, we examined items near the fuzzy border that were sometimes rejected as nonmembers of the category and found both hemispheres use the same category boundary. In Experiment 2, we presented 2 target words to be categorized, with the expectation of augmenting the speed advantage for the RVF/LH if the 2 hemispheres differ structurally. Instead the results showed a weakening of the hemispheric difference, arguing against a structural in favor of a processing explanation.
How Yellow Is Your Banana? Toddlers' Language-Mediated Visual Search in Referent-Present Tasks
ERIC Educational Resources Information Center
Mani, Nivedita; Johnson, Elizabeth; McQueen, James M.; Huettig, Falk
2013-01-01
What is the relative salience of different aspects of word meaning in the developing lexicon? The current study examines the time-course of retrieval of semantic and color knowledge associated with words during toddler word recognition: At what point do toddlers orient toward an image of a yellow cup upon hearing color-matching words such as…
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
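The optimal cue integration invoked in the abstract above can be illustrated by its simplest textbook special case: two independent Gaussian estimates (one auditory, one visual) fused by inverse-variance weighting. This is a generic sketch of that standard computation, not the authors' multidimensional word-space model; all values are illustrative:

```python
def fuse_gaussian_cues(mu_a, var_a, mu_v, var_v):
    """Maximum-likelihood fusion of two independent Gaussian cues.
    Each cue is weighted by its reliability (inverse variance), and
    the fused variance is smaller than either input variance."""
    w_a, w_v = 1.0 / var_a, 1.0 / var_v
    mu = (w_a * mu_a + w_v * mu_v) / (w_a + w_v)
    var = 1.0 / (w_a + w_v)
    return mu, var

# A noisy auditory estimate (high variance, as in high auditory noise)
# is pulled toward the more reliable visual estimate.
mu, var = fuse_gaussian_cues(mu_a=0.0, var_a=4.0, mu_v=1.0, var_v=1.0)
```

The inverse-effectiveness question in the abstract amounts to asking how much this fusion improves on the auditory estimate alone as var_a grows, which in the authors' model depends on the dimensionality of the word feature space.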
Koban, Leonie; Ninck, Markus; Li, Jun; Gisler, Thomas; Kissler, Johanna
2010-07-27
Emotional stimuli are preferentially processed compared to neutral ones. Measuring the magnetic resonance blood-oxygen level dependent (BOLD) response or EEG event-related potentials, this has also been demonstrated for emotional versus neutral words. However, it is currently unclear whether emotion effects in word processing can also be detected with other measures such as EEG steady-state visual evoked potentials (SSVEPs) or optical brain imaging techniques. In the present study, we simultaneously performed SSVEP measurements and near-infrared diffusing-wave spectroscopy (DWS), a new optical technique for the non-invasive measurement of brain function, to measure brain responses to neutral, pleasant, and unpleasant nouns flickering at a frequency of 7.5 Hz. The power of the SSVEP signal was significantly modulated by the words' emotional content at occipital electrodes, showing reduced SSVEP power during stimulation with pleasant compared to neutral nouns. By contrast, the DWS signal measured over the visual cortex showed significant differences between stimulation with flickering words and baseline periods, but no modulation in response to the words' emotional significance. This study is the first investigation of brain responses to emotional words using simultaneous measurements of SSVEPs and DWS. Emotional modulation of word processing was detected with EEG SSVEPs, but not by DWS. SSVEP power for emotional, specifically pleasant, compared to neutral words was reduced, which contrasts with previous results obtained when presenting emotional pictures. This appears to reflect processing differences between symbolic and pictorial emotional stimuli. While pictures prompt sustained perceptual processing, decoding the significance of emotional words requires more internal associative processing. Reasons for an absence of emotion effects in the DWS signal are discussed.
2013-01-01
Background Event-related brain potentials (ERPs) were used to investigate training-related changes in fast visual word recognition of functionally illiterate adults. Analyses focused on the left-lateralized occipito-temporal N170, which represents the earliest processing of visual word forms. Event-related brain potentials were recorded from 20 functional illiterates receiving intensive literacy training for adults, 10 functional illiterates not participating in the training and 14 regular readers while they read words, pseudowords or viewed symbol strings. Subjects were required to press a button whenever a stimulus was immediately repeated. Results Attending intensive literacy training was associated with improvements in reading and writing skills and with an increase of the word-related N170 amplitude. For untrained functional illiterates and regular readers no changes in literacy skills or N170 amplitude were observed. Conclusions Results of the present study suggest that the word-related N170 can still be modulated in adulthood as a result of the improvements in literacy skills. PMID:24330622
Boltzmann, Melanie; Rüsseler, Jascha
2013-12-13
Event-related brain potentials (ERPs) were used to investigate training-related changes in fast visual word recognition of functionally illiterate adults. Analyses focused on the left-lateralized occipito-temporal N170, which represents the earliest processing of visual word forms. Event-related brain potentials were recorded from 20 functional illiterates receiving intensive literacy training for adults, 10 functional illiterates not participating in the training and 14 regular readers while they read words, pseudowords or viewed symbol strings. Subjects were required to press a button whenever a stimulus was immediately repeated. Attending intensive literacy training was associated with improvements in reading and writing skills and with an increase of the word-related N170 amplitude. For untrained functional illiterates and regular readers no changes in literacy skills or N170 amplitude were observed. Results of the present study suggest that the word-related N170 can still be modulated in adulthood as a result of the improvements in literacy skills.
Letter position coding across modalities: the case of Braille readers.
Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo
2012-01-01
The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
Using spoken words to guide open-ended category formation.
Chauhan, Aneesh; Seabra Lopes, Luís
2011-11-01
Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.
ERIC Educational Resources Information Center
Hack, Zarita Caplan; Erber, Norman P.
1982-01-01
Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…
Visual recognition of permuted words
NASA Astrophysics Data System (ADS)
Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.
2010-02-01
In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We frame our study in the context of dual-route theories of reading, and the dual-route account is consistent with our hypothesis of distinct underlying cognitive processes for reading permuted and non-permuted words. We conducted three experiments using lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response time latencies between the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and by 11% for German. We also found a considerable difference in reading behavior between cursive and alphabetic languages: reading of Urdu is comparatively slower than reading of German due to the characteristics of its cursive script.
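The latency comparisons in the abstract above rest on t-tests over response-time samples. A self-contained sketch of the Welch's t statistic such a comparison might use (the statistic only; degrees of freedom and p-values are omitted, and the RT values are invented, not the study's data):

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples with possibly
    unequal variances: difference of means over its standard error."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# Toy lexical-decision latencies in ms (hypothetical, for illustration).
rt_normal = [540, 575, 560, 610, 590, 555]
rt_permuted = [640, 690, 655, 720, 700, 665]
t = welch_t(rt_permuted, rt_normal)
```

A large positive t here would mirror the abstract's finding that permuted words are read more slowly than normal words; in practice the statistic is compared against a t distribution with Welch-Satterthwaite degrees of freedom.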
Amsel, Ben D; Kutas, Marta; Coulson, Seana
2017-10-01
In grapheme-color synesthesia, seeing particular letters or numbers evokes the experience of specific colors. We investigate the brain's real-time processing of words in this population by recording event-related brain potentials (ERPs) from 15 grapheme-color synesthetes and 15 controls as they judged the validity of word pairs ('yellow banana' vs. 'blue banana') presented under high and low visual contrast. Low contrast words elicited delayed P1/N170 visual ERP components in both groups, relative to high contrast. When color concepts were conveyed to synesthetes by individually tailored achromatic grapheme strings ('55555 banana'), visual contrast effects were like those in color words: P1/N170 components were delayed but unchanged in amplitude. When controls saw equivalent colored grapheme strings, visual contrast modulated P1/N170 amplitude but not latency. Color induction in synesthetes thus differs from color perception in controls. Independent from experimental effects, all orthographic stimuli elicited larger N170 and P2 in synesthetes than controls. While P2 (150-250ms) enhancement was similar in all synesthetes, N170 (130-210ms) amplitude varied with individual differences in synesthesia and visual imagery. Results suggest immediate cross-activation in visual areas processing color and shape is most pronounced in so-called projector synesthetes whose concurrent colors are experienced as originating in external space.
ERIC Educational Resources Information Center
Mayer, Richard E.; Moreno, Roxana
1998-01-01
Multimedia learners (n=146 college students) were able to integrate words and computer-presented pictures more easily when the words were presented aurally rather than visually. This split-attention effect is consistent with a dual-processing model of working memory. (SLD)
Does letter rotation slow down orthographic processing in word recognition?
Perea, Manuel; Marcet, Ana; Fernández-López, María
2018-02-01
Leading neural models of visual word recognition assume that letter rotation slows down the conversion of the visual input to a stable orthographic representation (e.g., local detectors combination model; Dehaene, Cohen, Sigman, & Vinckier, 2005, Trends in Cognitive Sciences, 9, 335-341). If this premise is true, briefly presented rotated primes should be less effective at activating word representations than those primes with upright letters. To test this question, we conducted a masked priming lexical decision experiment with vertically presented words either rotated 90° or in marquee format (i.e., vertically but with upright letters). We examined the impact of the format on both letter identity (masked identity priming: identity vs. unrelated) and letter position (masked transposed-letter priming: transposed-letter prime vs. replacement-letter prime). Results revealed sizeable masked identity and transposed-letter priming effects that were similar in magnitude for rotated and marquee words. Therefore, the reading cost from letter rotation does not arise in the initial access to orthographic/lexical representations.
Picturing words? Sensorimotor cortex activation for printed words in child and adult readers
Dekker, Tessa M.; Mareschal, Denis; Johnson, Mark H.; Sereno, Martin I.
2014-01-01
Learning to read involves associating abstract visual shapes with familiar meanings. Embodiment theories suggest that word meaning is at least partially represented in distributed sensorimotor networks in the brain (Barsalou, 2008; Pulvermueller, 2013). We explored how reading comprehension develops by tracking when and how printed words start activating these “semantic” sensorimotor representations as children learn to read. Adults and children aged 7–10 years showed clear category-specific cortical specialization for tool versus animal pictures during a one-back categorisation task. Thus, sensorimotor representations for these categories were in place at all ages. However, co-activation of these same brain regions by the visual objects’ written names was only present in adults, even though all children could read and comprehend all presented words, showed adult-like task performance, and older children were proficient readers. It thus takes years of training and expert reading skill before spontaneous processing of printed words’ sensorimotor meanings develops in childhood. PMID:25463817
Words, shape, visual search and visual working memory in 3-year-old children.
Vales, Catarina; Smith, Linda B
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated search times and to examine one route through which labels could have their effect: By influencing the visual working memory representation of the target. The targets and distractors were pictures of instances of basic-level known categories and the labels were the common name for the target category. We predicted that the label would enhance the visual working memory representation of the target object, guiding attention to objects that better matched the target representation. Experiments 1 and 2 used conjunctive search tasks, and Experiment 3 varied shape discriminability between targets and distractors. Experiment 4 compared the effects of labels to repeated presentations of the visual target, which should also influence the working memory representation of the target. The overall pattern fits contemporary theories of how the contents of visual working memory interact with visual search and attention, and shows that even in very young children heard words affect the processing of visual information. © 2014 John Wiley & Sons Ltd.
Morphological Effects in Children Word Reading: A Priming Study in Fourth Graders
ERIC Educational Resources Information Center
Casalis, Severine; Dusautoir, Marion; Cole, Pascale; Ducrot, Stephanie
2009-01-01
A growing corpus of evidence suggests that morphology could play a role in reading acquisition, and that young readers could be sensitive to the morphemic structure of written words. In the present experiment, we examined whether and when morphological information is activated in word recognition. French fourth graders made visual lexical…
Phonologic Processing in Adults Who Stutter: Electrophysiological and Behavioral Evidence.
ERIC Educational Resources Information Center
Weber-Fox, Christine; Spencer, Rebecca M.C.; Spruill, John E., III; Smith, Anne
2004-01-01
Event-related brain potentials (ERPs), judgment accuracy, and reaction times (RTs) were obtained for 11 adults who stutter and 11 normally fluent speakers as they performed a rhyme judgment task of visually presented word pairs. Half of the word pairs (i.e., prime and target) were phonologically and orthographically congruent across words. That…
Effects of Numerical Surface Form in Arithmetic Word Problems
ERIC Educational Resources Information Center
Orrantia, Josetxu; Múñez, David; San Romualdo, Sara; Verschaffel, Lieven
2015-01-01
Adults' simple arithmetic performance is more efficient when operands are presented in Arabic digit (3 + 5) than in number word (three + five) format. One explanation is that visual familiarity is higher for digits than for number words. However, most studies have been limited to single-digit addition and multiplication problems. In…
Words, Hemispheres, and Processing Mechanisms: A Response to Marsolek and Deason (2007)
ERIC Educational Resources Information Center
Ellis, Andrew W.; Ansorge, Lydia; Lavidor, Michal
2007-01-01
Ellis, Ansorge and Lavidor (2007) [Ellis, A.W., Ansorge, L., & Lavidor, M. (2007). Words, hemispheres, and dissociable subsystems: The effects of exposure duration, case alternation, priming and continuity of form on word recognition in the left and right visual fields. "Brain and Language," 103, 292-303.] presented three experiments investigating…
Cross-Language Priming of Word Meaning during Second Language Sentence Comprehension
ERIC Educational Resources Information Center
Yuan, Yanli; Woltz, Dan; Zheng, Robert
2010-01-01
The experiment investigated the benefit to second language (L2) sentence comprehension of priming word meanings with brief visual exposure to first language (L1) translation equivalents. Native English speakers learning Mandarin evaluated the validity of aurally presented Mandarin sentences. For selected words in half of the sentences there was…
ERIC Educational Resources Information Center
Halas, John
Visual scripting is the coordination of words with pictures in sequence. This book presents the methods and viewpoints on visual scripting of fourteen film makers, from nine countries, who are involved in animated cinema; it contains concise examples of how a storyboard and preproduction script can be prepared in visual terms; and it includes a…
Teaching the Visual Learner: The Use of Visual Summaries in Marketing Education
ERIC Educational Resources Information Center
Clarke, Irvine, III.; Flaherty, Theresa B.; Yankey, Michael
2006-01-01
Approximately 40% of college students are visual learners, preferring to be taught through pictures, diagrams, flow charts, timelines, films, and demonstrations. Yet marketing instruction remains heavily reliant on presenting content primarily through verbal cues such as written or spoken words. Without visual instruction, some students may be…
Scoring nuclear pleomorphism using a visual BoF modulated by a graph structure
NASA Astrophysics Data System (ADS)
Moncayo-Martínez, Ricardo; Romo-Bucheli, David; Arias, Viviana; Romero, Eduardo
2017-11-01
Nuclear pleomorphism has been recognized as a key histological criterion in breast cancer grading systems (such as the Bloom-Richardson and Nottingham grading systems). However, nuclear pleomorphism assessment is subjective and shows high inter-reader variability. Automatic algorithms might facilitate quantitative estimation of nuclear variations in shape and size. Nevertheless, automatic segmentation of nuclei is difficult and still an open research problem. This paper presents a method that uses a bag of multi-scale visual features (BoF), modulated by a graph structure, to grade nuclei in breast cancer microscopical fields. This strategy constructs hematoxylin-eosin image patches, each containing a nucleus that is represented by a set of visual words in the BoF. The contribution of each visual word is computed by examining the visual words in an associated graph, built by projecting the multi-dimensional BoF onto a bi-dimensional plane in which local relationships are preserved. The methodology was evaluated on 14 breast cancer cases from the Cancer Genome Atlas database. From these cases, a set of 134 microscopical fields was extracted, and under a leave-one-out validation scheme an average F-score of 0.68 was obtained.
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J
2009-02-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.
Dynamic spatial organization of the occipito-temporal word form area for second language processing.
Gao, Yue; Sun, Yafeng; Lu, Chunming; Ding, Guosheng; Guo, Taomei; Malins, Jeffrey G; Booth, James R; Peng, Danling; Liu, Li
2017-08-01
Despite the left occipito-temporal region having shown consistent activation in visual word form processing across numerous studies in different languages, the mechanisms by which word forms of second languages are processed in this region remain unclear. To examine this more closely, 16 Chinese-English and 14 English-Chinese late bilinguals were recruited to perform lexical decision tasks to visually presented words in both their native and second languages (L1 and L2) during functional magnetic resonance imaging scanning. Here we demonstrate that visual word form processing for L1 versus L2 engaged different spatial areas of the left occipito-temporal region. Namely, the spatial organization of the visual word form processing in the left occipito-temporal region is more medial and posterior for L2 than L1 processing in Chinese-English bilinguals, whereas activation is more lateral and anterior for L2 in English-Chinese bilinguals. In addition, for Chinese-English bilinguals, more lateral recruitment of the occipito-temporal region was correlated with higher L2 proficiency, suggesting higher L2 proficiency is associated with greater involvement of L1-preferred mechanisms. For English-Chinese bilinguals, higher L2 proficiency was correlated with more lateral and anterior activation of the occipito-temporal region, suggesting higher L2 proficiency is associated with greater involvement of L2-preferred mechanisms. Taken together, our results indicate that L1 and L2 recruit spatially different areas of the occipito-temporal region in visual word processing when the two scripts belong to different writing systems, and that the spatial organization of this region for L2 visual word processing is dynamically modulated by L2 proficiency. Specifically, proficiency in L2 in Chinese-English is associated with assimilation to the native language mechanisms, whereas L2 in English-Chinese is associated with accommodation to second language mechanisms. Copyright © 2017. 
Affective Overload: The Effect of Emotive Visual Stimuli on Target Vocabulary Retrieval.
Çetin, Yakup; Griffiths, Carol; Özel, Zeynep Ebrar Yetkiner; Kinay, Hüseyin
2016-04-01
There has been considerable interest in cognitive load in recent years, but the effect of affective load and its relationship to mental functioning has not received as much attention. In order to investigate the effects of affective stimuli on cognitive function, as manifest in the ability to remember foreign language vocabulary, two groups of student volunteers (N = 64) aged 17 to 25 years were shown a PowerPoint presentation of 21 target language words with a picture, audio, and written form for every word. The vocabulary was presented in comfortable rooms with padded chairs, and the participants were provided with snacks so that they would be comfortable and relaxed. After the PowerPoint presentation they were exposed to one of two forms of visual stimuli for 27 min. The formats contained either affective content (sexually suggestive, violent, or frightening material) or neutral content (a nature documentary). The group exposed to the emotive visual stimuli remembered significantly fewer words than the group that watched the emotively neutral nature documentary. Implications of this finding are discussed and suggestions made for ongoing research.
Hazardous sign detection for safety applications in traffic monitoring
NASA Astrophysics Data System (ADS)
Benesova, Wanda; Kottman, Michal; Sidla, Oliver
2012-01-01
The transportation of hazardous goods on public street systems can pose severe safety threats in case of accidents. One solution to this problem is automatic detection and registration of vehicles that are marked with dangerous-goods signs. We present a prototype system that can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different detection methods: the bag-of-visual-words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate; different codebook sizes were evaluated for this detection task. The best result of the first method (BoW) was 67% of hazardous signs successfully recognized, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
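The bag-of-visual-words representation underlying both methods above can be illustrated with a minimal sketch: local image descriptors are clustered into a "codebook" (whose size is the parameter the authors found crucial), and each image is then encoded as a histogram of its nearest code words. This is a generic BoW sketch under stated assumptions, not the paper's implementation; all descriptors here are synthetic stand-ins for real local features.

```python
# Minimal bag-of-visual-words sketch: build a codebook by k-means, then
# quantize an image's descriptors into a normalized visual-word histogram.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(500, 16))     # stand-in for SIFT-like features

codebook_size = 32                            # hypothetical vocabulary size
codebook = KMeans(n_clusters=codebook_size, n_init=10,
                  random_state=0).fit(descriptors)

image_descs = rng.normal(size=(60, 16))       # descriptors from one image
words = codebook.predict(image_descs)         # assign each to a visual word
bow_hist, _ = np.histogram(words, bins=np.arange(codebook_size + 1))
bow_hist = bow_hist / bow_hist.sum()          # normalized BoW representation
print(bow_hist.shape)
```

A classifier or voting scheme would then operate on `bow_hist`; re-running this with different `codebook_size` values is the kind of vocabulary-size evaluation the abstract describes.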
Flaisch, Tobias; Imhof, Martin; Schmälzle, Ralf; Wentz, Klaus-Ulrich; Ibach, Bernd; Schupp, Harald T
2015-01-01
The present study utilized functional magnetic resonance imaging (fMRI) to examine the neural processing of concurrently presented emotional stimuli under varying explicit and implicit attention demands. Specifically, in separate trials, participants indicated the category of either pictures or words. The words were placed over the center of the pictures and the picture-word compound-stimuli were presented for 1500 ms in a rapid event-related design. The results reveal pronounced main effects of task and emotion: the picture categorization task prompted strong activations in visual, parietal, temporal, frontal, and subcortical regions; the word categorization task evoked increased activation only in left extrastriate cortex. Furthermore, beyond replicating key findings regarding emotional picture and word processing, the results point to a dissociation of semantic-affective and sensory-perceptual processes for words: while emotional words engaged semantic-affective networks of the left hemisphere regardless of task, the increased activity in left extrastriate cortex associated with explicitly attending to words was diminished when the word was overlaid over an erotic image. Finally, we observed a significant interaction between Picture Category and Task within dorsal visual-associative regions, inferior parietal, and dorsolateral, and medial prefrontal cortices: during the word categorization task, activation was increased in these regions when the words were overlaid over erotic as compared to romantic pictures. During the picture categorization task, activity in these areas was relatively decreased when categorizing erotic as compared to romantic pictures. Thus, the emotional intensity of the pictures strongly affected brain regions devoted to the control of task-related word or picture processing. These findings are discussed with respect to the interplay of obligatory stimulus processing with task-related attentional control mechanisms.
Morphological effects in children word reading: a priming study in fourth graders.
Casalis, Séverine; Dusautoir, Marion; Colé, Pascale; Ducrot, Stéphanie
2009-09-01
A growing corpus of evidence suggests that morphology could play a role in reading acquisition, and that young readers could be sensitive to the morphemic structure of written words. In the present experiment, we examined whether and when morphological information is activated in word recognition. French fourth graders made visual lexical decisions to derived words preceded by primes sharing either a morphological or an orthographic relationship with the target. Results showed significant and equivalent facilitation priming effects in cases of morphologically and orthographically related primes at the shortest prime duration, and a significant facilitation priming effect in the case of only morphologically related primes at the longer prime duration. Thus, these results strongly suggest that a morphological level is involved in children's visual word recognition, although it is not distinct from the formal one at an early stage of word processing.
Meyer, Aaron M.; Federmeier, Kara D.
2008-01-01
The visual half-field procedure was used to examine hemispheric asymmetries in meaning selection. Event-related potentials were recorded as participants decided if a lateralized ambiguous or unambiguous prime was related in meaning to a centrally-presented target. Prime-target pairs were preceded by a related or unrelated centrally-presented context word. To separate the effects of meaning frequency and associative strength, unambiguous words were paired with concordant weakly-related context words and strongly-related targets (e.g., taste-sweet-candy) that were similar in associative strength to discordant subordinate-related context words and dominant-related targets (e.g., river-bank-deposit) in the ambiguous condition. Context words and targets were reversed in a second experiment. In an unrelated (neutral) context, N400 responses were more positive than baseline (facilitated) in all ambiguous conditions except when subordinate targets were presented on left visual field-right hemisphere (LVF-RH) trials. Thus, in the absence of biasing context information, the hemispheres seem to be differentially affected by meaning frequency, with the left maintaining multiple meanings and the right selecting the dominant meaning. In the presence of discordant context information, N400 facilitation was absent in both visual fields, indicating that the contextually-consistent meaning of the ambiguous word had been selected. In contrast, N400 facilitation occurred in all of the unambiguous conditions; however, the left hemisphere (LH) showed less facilitation for the weakly-related target when a strongly-related context was presented. These findings indicate that both hemispheres use context to guide meaning selection, but that the LH is more likely to focus activation on a single, contextually-relevant sense. PMID:17936727
Tracking the Eye Movement of Four-Year-Old Children Learning Chinese Words.
Lin, Dan; Chen, Guangyao; Liu, Yingyi; Liu, Jiaxin; Pan, Jue; Mo, Lei
2018-02-01
Storybook reading is the major source of literacy exposure for beginning readers. The present study tracked 4-year-old Chinese children's eye movements while they read simulated storybook pages. Their eye-movement patterns were examined in relation to their word learning gains. The same reading list, consisting of 20 two-character Chinese words, was used in the pretest, the 5-min eye-tracking learning session, and the posttest. Additionally, visual spatial skill and phonological awareness were assessed in the pretest as cognitive controls. The results showed that the children's attention was quickly captured by the pictures, which received most of their looking time; only 13% of the time was spent looking at the words. Nevertheless, significant learning gains in word reading were observed from pretest to posttest after just 5 min of exposure to simulated storybook pages with the words, picture, and pronunciation of the two-character words present. Furthermore, the children's attention to words significantly predicted posttest reading beyond socioeconomic status, age, visual spatial skill, phonological awareness, and pretest reading performance. This eye-movement evidence from children as young as 4 years reading a non-alphabetic script (Chinese) demonstrates that children can learn words effectively with minimal exposure and little instruction; the findings suggest that learning to read requires attention to the words themselves. The study contributes to our understanding of early reading acquisition with eye-movement evidence from beginning readers.
Crossmodal semantic priming by naturalistic sounds and spoken words enhances visual sensitivity.
Chen, Yi-Chuan; Spence, Charles
2011-10-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect of naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when the sound leads the picture as well as when they are presented simultaneously? And, second, do naturalistic sounds (e.g., a dog's "woofing") and spoken words (e.g., /dɔg/) elicit similar semantic priming effects? Here, we estimated participants' sensitivity and response criterion using signal detection theory in a picture detection task. The results demonstrate that naturalistic sounds enhanced visual sensitivity when the onset of the sounds led that of the picture by 346 ms (but not when the sounds led the pictures by 173 ms, nor when they were presented simultaneously; Experiments 1-3A). At the same SOA, however, spoken words did not induce semantic priming effects on visual detection sensitivity (Experiments 3B and 4A). When using a dual picture detection/identification task, both kinds of auditory stimulus induced a similar semantic priming effect (Experiment 4B). Therefore, we suggest that there needs to be sufficient processing time for the auditory stimulus to access its associated meaning to modulate visual perception. Moreover, the interactions between pictures and the two types of sounds depend not only on their processing route to access semantic representations, but also on the response to be made to fulfill the requirements of the task.
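The signal-detection measures the study above relies on, sensitivity and response criterion, are computed from hit and false-alarm rates. The sketch below is illustrative only (the counts are hypothetical, not the authors' data) and uses a standard log-linear correction so that rates of exactly 0 or 1 do not produce infinite z-scores.

```python
# Hedged sketch: estimating sensitivity (d') and response criterion (c)
# from hit/false-alarm counts in a yes-no picture detection task.
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d_prime, criterion); +0.5/+1 is a log-linear correction."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf              # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)    # distance between distributions
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # response bias
    return d_prime, criterion

# Hypothetical counts for one sound-leading-picture condition.
d, c = sdt_measures(hits=45, misses=15, false_alarms=10, correct_rejections=50)
print(round(d, 2), round(c, 2))
```

A sensitivity benefit from an auditory prime, as in the 346-ms lead condition, would show up as a larger `d` with the criterion `c` held roughly constant.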
Robinson, Amanda K; Plaut, David C; Behrmann, Marlene
2017-07-01
Words and faces have vastly different visual properties, but increasing evidence suggests that word and face processing engage overlapping distributed networks. For instance, fMRI studies have shown overlapping activity for face and word processing in the fusiform gyrus despite well-characterized lateralization of these objects to the left and right hemispheres, respectively. To investigate whether face and word perception influences perception of the other stimulus class and elucidate the mechanisms underlying such interactions, we presented images using rapid serial visual presentations. Across 3 experiments, participants discriminated 2 face, word, and glasses targets (T1 and T2) embedded in a stream of images. As expected, T2 discrimination was impaired when it followed T1 by 200 to 300 ms relative to longer intertarget lags, the so-called attentional blink. Interestingly, T2 discrimination accuracy was significantly reduced at short intertarget lags when a face was followed by a word (face-word) compared with glasses-word and word-word combinations, indicating that face processing interfered with word perception. The reverse effect was not observed; that is, word-face performance was no different than the other object combinations. EEG results indicated the left N170 to T1 was correlated with the word decrement for face-word trials, but not for other object combinations. Taken together, the results suggest face processing interferes with word processing, providing evidence for overlapping neural mechanisms of these 2 object types. Furthermore, asymmetrical face-word interference points to greater overlap of face and word representations in the left than the right hemisphere. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Spurgeon, Jessica; Ward, Geoff; Matthews, William J
2014-11-01
Participants who are presented with a short list of words for immediate free recall (IFR) show a strong tendency to initiate their recall with the 1st list item and then proceed in forward serial order. We report 2 experiments that examined whether this tendency was underpinned by a short-term memory store, of the type that is argued by some to underpin recency effects in IFR. In Experiment 1, we presented 3 groups of participants with lists of between 2 and 12 words for IFR, delayed free recall, and continuous-distractor free recall. The to-be-remembered words were simultaneously spoken and presented visually, and the distractor task involved silently solving a series of self-paced, visually presented mathematical equations (e.g., 3 + 2 + 4 = ?). The tendency to initiate recall at the start of short lists was greatest in IFR but was also present in the 2 other recall conditions. This finding was replicated in Experiment 2, where the to-be-remembered items were presented visually in silence and the participants spoke aloud their answers to computer-paced mathematical equations. Our results imply that a short-term buffer cannot be fully responsible for the tendency to initiate recall from the beginning of a short list; rather, they suggest that this tendency represents a general property of episodic memory that occurs across a range of time scales. PsycINFO Database Record (c) 2014 APA, all rights reserved.
A multistream model of visual word recognition.
Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie
2009-02-01
Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)--suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream--as with mixed-case presentation.
Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language
ERIC Educational Resources Information Center
Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter
2011-01-01
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…
Parafoveal preview benefit in reading is only obtained from the saccade goal.
McDonald, Scott A
2006-12-01
Previous research has demonstrated that reading is less efficient when parafoveal visual information about upcoming words is invalid or unavailable; the benefit from a valid preview is realised as reduced reading times on the subsequently foveated word, and has been explained with reference to the allocation of attentional resources to parafoveal word(s). This paper presents eyetracking evidence that preview benefit is obtained only for words that are selected as the saccade target. Using a gaze-contingent display change paradigm (Rayner, K. (1975). The perceptual span and peripheral cues in reading. Cognitive Psychology, 7, 65-81), the position of the triggering boundary was set near the middle of the pretarget word. When a refixation saccade took the eye across the boundary in the pretarget word, there was no reliable effect of the validity of the target word preview. However, when the triggering boundary was positioned just after the pretarget word, a robust preview benefit was observed, replicating previous research. The current results complement findings from studies of basic visual function, suggesting that for the case of preview benefit in reading, attentional and oculomotor processes are obligatorily coupled.
Effect of auditory presentation of words on color naming: the intermodal Stroop effect.
Shimada, H
1990-06-01
To verify two hypotheses (the automatic parallel-processing model vs the feature integration theory) using the Stroop effect, an intermodal presentation method was introduced. The intermodal presentation (auditory presentation of the distractor word and visual presentation of color patch) separates completely the color and word information. Subjects were required to name the color patch on the CRT and to ignore the auditory color-word in the present experiment. A 5 (stimulus onset asynchronies) x 4 (levels of congruency) analysis of variance with repeated measures was performed on the response times. Two main effects and an interactive effect were significant. The findings indicate that the Stroop effect occurs even when the color and word components are not presented in the same spatial location. These results suggest that the feature-integration theory cannot explain the mechanisms underlying the Stroop effect.
Visual attention based bag-of-words model for image classification
NASA Astrophysics Data System (ADS)
Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che
2014-04-01
Bag-of-words is a classical method for image classification. The core problem is how to count the frequency of the visual words and what visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the computation of visual word frequencies. On the other hand, the VABOW model combines shape, color and texture cues and uses L1 regularization logistic regression method to select the most relevant and most efficient features. We compare our approach with traditional bag-of-words based method on two datasets, and the result shows that our VABOW model outperforms the state-of-the-art method for image classification.
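The saliency-weighting idea in this abstract can be sketched in a few lines. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the function name, data layout, and toy values are all hypothetical, and it assumes descriptors have already been assigned to dictionary words.

```python
import numpy as np

def saliency_weighted_bow(word_ids, keypoints, saliency, n_words):
    """Build a saliency-weighted bag-of-visual-words histogram.

    word_ids : visual-word index assigned to each local descriptor
    keypoints: (row, col) image location of each descriptor
    saliency : 2-D saliency map with values in [0, 1]
    n_words  : dictionary size
    """
    hist = np.zeros(n_words)
    for w, (r, c) in zip(word_ids, keypoints):
        hist[w] += saliency[r, c]   # salient regions contribute more weight
    s = hist.sum()
    return hist / s if s > 0 else hist

# Toy example: 3 descriptors on a 2x2 saliency map, 4-word dictionary.
sal = np.array([[1.0, 0.2],
                [0.5, 0.0]])
h = saliency_weighted_bow([0, 1, 0], [(0, 0), (1, 0), (0, 1)], sal, 4)
```

Here the two descriptors mapped to word 0 fall in salient regions, so word 0 dominates the normalized histogram even though plain counting would weight each occurrence equally.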
ERIC Educational Resources Information Center
Reinke, Karen; Fernandes, Myra; Schwindt, Graeme; O'Craven, Kathleen; Grady, Cheryl L.
2008-01-01
The functional specificity of the brain region known as the Visual Word Form Area (VWFA) was examined using fMRI. We explored whether this area serves a general role in processing symbolic stimuli, rather than being selective for the processing of words. Brain activity was measured during a visual 1-back task to English words, meaningful symbols…
ERIC Educational Resources Information Center
Cole, Charles; Mandelblatt, Bertie; Stevenson, John
2002-01-01
Discusses high recall search strategies for undergraduates and how to overcome information overload that results. Highlights include word-based versus visual-based schemes; five summarization and visualization schemes for presenting information retrieval citation output; and results of a study that recommend visualization schemes geared toward…
Pupillary Responses to Words That Convey a Sense of Brightness or Darkness
Mathôt, Sebastiaan; Grainger, Jonathan; Strijkers, Kristof
2017-01-01
Theories about embodiment of language hold that when you process a word’s meaning, you automatically simulate associated sensory input (e.g., perception of brightness when you process lamp) and prepare associated actions (e.g., finger movements when you process typing). To test this latter prediction, we measured pupillary responses to single words that conveyed a sense of brightness (e.g., day) or darkness (e.g., night) or were neutral (e.g., house). We found that pupils were largest for words conveying darkness, of intermediate size for neutral words, and smallest for words conveying brightness. This pattern was found for both visually presented and spoken words, which suggests that it was due to the words’ meanings, rather than to visual or auditory properties of the stimuli. Our findings suggest that word meaning is sufficient to trigger a pupillary response, even when this response is not imposed by the experimental task, and even when this response is beyond voluntary control. PMID:28613135
ERP correlates of letter identity and letter position are modulated by lexical frequency
Vergara-Martínez, Marta; Perea, Manuel; Gómez, Pablo; Swaab, Tamara Y.
2013-01-01
The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: a transposed-letter pseudoword (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition. PMID:23454070
Letter Position Coding Across Modalities: The Case of Braille Readers
Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo
2012-01-01
Background The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Methodology Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. Principal Findings We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. Conclusions The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus. PMID:23071522
Dictionary Pruning with Visual Word Significance for Medical Image Retrieval
Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei
2016-01-01
Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) to identify discriminative characteristics between different medical images with Pruned Dictionary based on Latent Semantic Topic description. We refer to this as the PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt, based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. The words with higher values are considered meaningful with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597
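The abstract describes an iterative ranking over the relationship between latent topics and visual words but does not specify the algorithm. One plausible reading is a HITS-style power iteration that alternates word-to-topic and topic-to-word score propagation; the sketch below is an assumption-laden illustration, with all names (`overall_word_significance`, the `topic_word` matrix layout) hypothetical rather than taken from the PD-LST paper.

```python
import numpy as np

def overall_word_significance(topic_word, n_iter=50):
    """Iteratively rank visual words from a topic-word significance matrix.

    topic_word : (n_topics, n_words) array of nonnegative topic-word
                 significance values.
    Returns a normalized importance score per word, computed by
    alternating word -> topic -> word propagation (HITS-style).
    """
    n_topics, n_words = topic_word.shape
    word = np.ones(n_words) / n_words           # uniform initial scores
    for _ in range(n_iter):
        topic = topic_word @ word               # topics weighted by word scores
        topic /= np.linalg.norm(topic)
        word = topic_word.T @ topic             # words weighted by topic scores
        word /= np.linalg.norm(word)
    return word / word.sum()

# Toy dictionary of 3 words scored against 2 latent topics.
tw = np.array([[0.9, 0.1, 0.0],
               [0.8, 0.2, 0.1]])
scores = overall_word_significance(tw)
```

In this toy case word 0 is strongly tied to both topics, so it receives the highest overall significance; words with uniformly weak topic ties would be the candidates for pruning from the dictionary.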
Borkowska, Aneta Rita; Francuz, Piotr; Soluch, Paweł; Wolak, Tomasz
2014-10-01
The present study aimed at defining the specific traits of brain activation in teenagers with isolated spelling disorder in comparison with good spellers. An fMRI examination was performed in which the subjects' task involved deciding (1) whether the visually presented words were spelled correctly or not (the orthographic decision task), and (2) whether the two presented letter strings (pseudowords) were identical or not (the visual decision task). Half of the displays showing meaningful words with an orthographic difficulty contained pairs with both words spelled correctly, and half of them contained one misspelled word. Half of the pseudowords were identical, half of them were not. The participants of the study included 15 individuals with isolated spelling disorder and 14 good spellers, aged 13-15. The results demonstrated that the essential differences in brain activation between teenagers with isolated spelling disorder and good spellers were found in the left inferior frontal gyrus, left medial frontal gyrus and right cerebellum posterior lobe, i.e. structures important for language processes, working memory and automaticity of behaviour. Spelling disorder is not only an effect of language dysfunction; it may also reflect difficulties in learning and automatizing the motor and visual forms of written words, in rapid information processing, and in the automatic use of the orthographic lexicon. Copyright © 2013 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.
Iconic Factors and Language Word Order
ERIC Educational Resources Information Center
Moeser, Shannon Dawn
1975-01-01
College students were presented with an artificial language in which spoken nonsense words were correlated with visual references. Inferences regarding vocabulary acquisition were drawn, and it was suggested that the processing of the language was mediated through a semantic memory system. (CK)
Boukadi, Mariem; Potvin, Karel; Macoir, Joël; Jr Laforce, Robert; Poulin, Stéphane; Brambati, Simona M; Wilson, Maximiliano A
2016-06-01
The co-occurrence of semantic impairment and surface dyslexia in the semantic variant of primary progressive aphasia (svPPA) has often been taken as supporting evidence for the central role of semantics in visual word processing. According to connectionist models, semantic access is needed to accurately read irregular words. They also postulate that reliance on semantics is necessary to perform the lexical decision task under certain circumstances (for example, when the stimulus list comprises pseudohomophones). In the present study, we report two svPPA cases: M.F. who presented with surface dyslexia but performed accurately on the lexical decision task with pseudohomophones, and R.L. who showed no surface dyslexia but performed below the normal range on the lexical decision task with pseudohomophones. This double dissociation between reading and lexical decision with pseudohomophones is in line with the dual-route cascaded (DRC) model of reading. According to this model, impairments in visual word processing in svPPA are not necessarily associated with the semantic deficits characterizing this disease. Our findings also call into question the central role given to semantics in visual word processing within the connectionist account. Copyright © 2016 Elsevier Ltd. All rights reserved.
Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J
1999-05-01
The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). 
The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but it was broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal words but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
Delogu, Franco; Lilla, Christopher C
2017-11-01
Contrasting results in visual and auditory spatial memory stimulate the debate over the role of sensory modality and attention in identity-to-location binding. We investigated the role of sensory modality in the incidental/deliberate encoding of the location of a sequence of items. In 4 separated blocks, 88 participants memorised sequences of environmental sounds, spoken words, pictures and written words, respectively. After memorisation, participants were asked to recognise old from new items in a new sequence of stimuli. They were also asked to indicate from which side of the screen (visual stimuli) or headphone channel (sounds) the old stimuli were presented in encoding. In the first block, participants were not aware of the spatial requirement while, in blocks 2, 3 and 4 they knew that their memory for item location was going to be tested. Results show significantly lower accuracy of object location memory for the auditory stimuli (environmental sounds and spoken words) than for images (pictures and written words). Awareness of spatial requirement did not influence localisation accuracy. We conclude that: (a) object location memory is more effective for visual objects; (b) object location is implicitly associated with item identity during encoding and (c) visual supremacy in spatial memory does not depend on the automaticity of object location binding.
Chen, Yi-Chuan; Spence, Charles
2018-04-30
We examined the time-courses and categorical specificity of the crossmodal semantic congruency effects elicited by naturalistic sounds and spoken words on the processing of visual pictures (Experiment 1) and printed words (Experiment 2). Auditory cues were presented at 7 different stimulus onset asynchronies (SOAs) with respect to the visual targets, and participants made speeded categorization judgments (living vs. nonliving). Three common effects were observed across 2 experiments: Both naturalistic sounds and spoken words induced a slowly emerging congruency effect when leading by 250 ms or more in the congruent compared with the incongruent condition, and a rapidly emerging inhibitory effect when leading by 250 ms or less in the incongruent condition as opposed to the noise condition. Only spoken words that did not match the visual targets elicited an additional inhibitory effect when leading by 100 ms or when presented simultaneously. Compared with nonlinguistic stimuli, the crossmodal congruency effects associated with linguistic stimuli occurred over a wider range of SOAs and occurred at a more specific level of the category hierarchy (i.e., the basic level) than was required by the task. A comprehensive framework is proposed to provide a dynamic view regarding how meaning is extracted during the processing of visual or auditory linguistic and nonlinguistic stimuli, therefore contributing to our understanding of multisensory semantic processing in humans. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Presentation format effects in working memory: the role of attention.
Foos, Paul W; Goolkasian, Paula
2005-04-01
Four experiments are reported in which participants attempted to remember three or six concrete nouns, presented as pictures, spoken words, or printed words, while also verifying the accuracy of sentences. Hypotheses meant to explain the higher recall of pictures and spoken words over printed words were tested. Increasing the difficulty and changing the type of processing task from arithmetic to a visual/spatial reasoning task did not influence recall. An examination of long-term modality effects showed that those effects were not sufficient to explain the superior performance with spoken words and pictures. Only when we manipulated the allocation of attention to the items in the storage task by requiring the participants to articulate the items and by presenting the stimulus items under a degraded condition were we able to reduce or remove the effect of presentation format. The findings suggest that the better recall of pictures and spoken words over printed words results from the fact that under normal presentation conditions, printed words receive less processing attention than pictures and spoken words do.
Neural events that underlie remembering something that never happened.
Gonsalves, B; Paller, K A
2000-12-01
We induced people to experience a false-memory illusion by first asking them to visualize common objects when cued with the corresponding word; on some trials, a photograph of the object was presented 1800 ms after the cue word. We then tested their memory for the photographs. Posterior brain potentials in response to words at encoding were more positive if the corresponding object was later falsely remembered as a photograph. Similar brain potentials during the memory test were more positive for true than for false memories. These results implicate visual imagery in the generation of false memories and provide neural correlates of processing differences between true and false memories.
The online social self: an open vocabulary approach to personality.
Kern, Margaret L; Eichstaedt, Johannes C; Schwartz, H Andrew; Dziurzynski, Lukasz; Ungar, Lyle H; Stillwell, David J; Kosinski, Michal; Ramones, Stephanie M; Seligman, Martin E P
2014-04-01
We present a new open language analysis approach that identifies and visually summarizes the dominant naturally occurring words and phrases that most distinguished each Big Five personality trait. Using millions of posts from 69,792 Facebook users, we examined the correlation of personality traits with online word usage. Our analysis method consists of feature extraction, correlational analysis, and visualization. The distinguishing words and phrases were face valid and provide insight into processes that underlie the Big Five traits. Open-ended data driven exploration of large datasets combined with established psychological theory and measures offers new tools to further understand the human psyche. © The Author(s) 2013.
The Role of Color in Search Templates for Real-world Target Objects.
Nako, Rebecca; Smith, Tim J; Eimer, Martin
2016-11-01
During visual search, target representations (attentional templates) control the allocation of attention to template-matching objects. The activation of new attentional templates can be prompted by verbal or pictorial target specifications. We measured the N2pc component of the ERP as a temporal marker of attentional target selection to determine the role of color signals in search templates for real-world search target objects that are set up in response to word or picture cues. On each trial run, a word cue (e.g., "apple") was followed by three search displays that contained the cued target object among three distractors. The selection of the first target was based on the word cue only, whereas selection of the two subsequent targets could be controlled by templates set up after the first visual presentation of the target (picture cue). In different trial runs, search displays either contained objects in their natural colors or monochromatic objects. These two display types were presented in different blocks (Experiment 1) or in random order within each block (Experiment 2). RTs were faster, and target N2pc components emerged earlier for the second and third display of each trial run relative to the first display, demonstrating that pictures are more effective than word cues in guiding search. N2pc components were triggered more rapidly for targets in the second and third display in trial runs with colored displays. This demonstrates that when visual target attributes are fully specified by picture cues, the additional presence of color signals in target templates facilitates the speed with which attention is allocated to template-matching objects. No such selection benefits for colored targets were found when search templates were set up in response to word cues. Experiment 2 showed that color templates activated by word cues can even impair the attentional selection of noncolored targets. 
Results provide new insights into the status of color during the guidance of visual search for real-world target objects. Color is a powerful guiding feature when the precise visual properties of these objects are known but seems to be less important when search targets are specified by word cues.
A Graph-Embedding Approach to Hierarchical Visual Word Mergence.
Wang, Lei; Liu, Lingqiao; Zhou, Luping
2017-02-01
Appropriately merging visual words is an effective dimension-reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario, where a preferred structure and an undesired structure are defined, and, therefore, can effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and, thus, well maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first one that addresses the hierarchical visual word mergence in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and outperform all the existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.
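The general shape of hierarchical word merging can be illustrated without the graph-embedding machinery. The sketch below greedily merges the two words whose class-conditional profiles are most similar; this criterion is a simple stand-in chosen for illustration (the paper's graph-embedding formulation is more general), and all names and toy data are hypothetical.

```python
import numpy as np

def merge_words(class_word, target_dim):
    """Greedy hierarchical visual-word merging.

    class_word : (n_classes, n_words) word counts per class.
    Repeatedly merges the two words with the most similar
    class-conditional profiles until target_dim words remain.
    Returns the merge history and the reduced count matrix.
    """
    cols = [class_word[:, j].astype(float) for j in range(class_word.shape[1])]
    history = []
    while len(cols) > target_dim:
        best, pair = np.inf, None
        for i in range(len(cols)):
            for j in range(i + 1, len(cols)):
                pi = cols[i] / cols[i].sum()
                pj = cols[j] / cols[j].sum()
                d = np.abs(pi - pj).sum()   # L1 distance between class profiles
                if d < best:
                    best, pair = d, (i, j)
        i, j = pair
        cols[i] = cols[i] + cols[j]         # merged word = summed counts
        del cols[j]
        history.append(pair)
    return history, np.stack(cols, axis=1)

# Toy case: words 0 and 1 behave alike across the 2 classes; word 2 differs.
cw = np.array([[10, 9, 0],
               [ 1, 1, 8]])
hist, merged = merge_words(cw, 2)
```

Because merges are recorded in order, the history fully determines the hierarchy, so any intermediate dictionary size can be recovered by replaying a prefix of it.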
Orthographic processing in pigeons (Columba livia)
Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael
2016-01-01
Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million y ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211
Word Spelling Assessment Using ICT: The Effect of Presentation Modality
ERIC Educational Resources Information Center
Sarris, Menelaos; Panagiotakopoulos, Chris
2010-01-01
To date, the spelling process has been assessed using typical spelling-to-dictation tasks, in which children's performance is evaluated mainly in terms of spelling error scores. In the present work a simple graphical computer interface is reported, aiming to investigate the effects of input modality (e.g. visual and verbal) in word spelling. The software…
Effects of Study Task on the Neural Correlates of Source Encoding
ERIC Educational Resources Information Center
Park, Heekyeong; Uncapher, Melina R.; Rugg, Michael D.
2008-01-01
The present study investigated whether the neural correlates of source memory vary according to study task. Subjects studied visually presented words in one of two background contexts. In each test, subjects made old/new recognition and source memory judgments. In one study test cycle, study words were subjected to animacy judgments, whereas in…
ERIC Educational Resources Information Center
Woollams, Anna M.; Silani, Giorgia; Okada, Kayoko; Patterson, Karalyn; Price, Cathy J.
2011-01-01
Prior lesion and functional imaging studies have highlighted the importance of the left ventral occipito-temporal (LvOT) cortex for visual word recognition. Within this area, there is a posterior-anterior hierarchy of subregions that are specialized for different stages of orthographic processing. The aim of the present fMRI study was to…
ERIC Educational Resources Information Center
Wilson, Maximiliano A.; Cuetos, Fernando; Davies, Rob; Burani, Cristina
2013-01-01
Word age-of-acquisition (AoA) affects reading. The mapping hypothesis predicts AoA effects when input--output mappings are arbitrary. In Spanish, the orthography-to-phonology mappings required for word naming are consistent; therefore, no AoA effects are expected. Nevertheless, AoA effects have been found, motivating the present investigation of…
Cross-modal metaphorical mapping of spoken emotion words onto vertical space.
Montoro, Pedro R; Contreras, María José; Elosúa, María Rosa; Marmolejo-Ramos, Fernando
2015-01-01
From the field of embodied cognition, previous studies have reported evidence of metaphorical mapping of emotion concepts onto a vertical spatial axis. Most of the work on this topic has used visual words as the typical experimental stimuli. However, to our knowledge, no previous study has examined the association between affect and vertical space using a cross-modal procedure. The current research is a first step toward the study of the metaphorical mapping of emotions onto vertical space by means of an auditory to visual cross-modal paradigm. In the present study, we examined whether auditory words with an emotional valence can interact with the vertical visual space according to a 'positive-up/negative-down' embodied metaphor. The general method consisted in the presentation of a spoken word denoting a positive/negative emotion prior to the spatial localization of a visual target in an upper or lower position. In Experiment 1, the spoken words were passively heard by the participants and no reliable interaction between emotion concepts and bodily simulated space was found. In contrast, Experiment 2 required more active listening of the auditory stimuli. A metaphorical mapping of affect and space was evident but limited to the participants engaged in an emotion-focused task. Our results suggest that the association of affective valence and vertical space is not activated automatically during speech processing since an explicit semantic and/or emotional evaluation of the emotionally valenced stimuli was necessary to obtain an embodied effect. The results are discussed within the framework of the embodiment hypothesis. PMID:26322007
[The role of external letter positions in visual word recognition].
Perea, Manuel; Lupker, Stephen J
2007-11-01
A key issue for any computational model of visual word recognition is the choice of an input coding schema, which is responsible for assigning letter positions. Such a schema must reflect the fact that, according to recent research, nonwords created by transposing letters (e.g., caniso for CASINO) typically appear to be more similar to the word than nonwords created by replacing letters (e.g., caviro). In the present research, we initially carried out a computational analysis examining the degree to which the position of the transposition influences transposed-letter similarity effects. We next conducted a masked priming experiment with the lexical decision task to determine whether a transposed-letter priming advantage occurs when the first letter position is involved. Primes were created by either transposing the first and third letters (démula-MEDULA) or replacing the first and third letters (bérula-MEDULA). Results showed that there was no transposed-letter priming advantage in this situation. We discuss the implications of these results for models of visual word recognition.
W-tree indexing for fast visual word generation.
Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao
2013-03-01
The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is the visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree and a K-means tree), in order to redirect the search path to be close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the fast library for approximate nearest neighbors and the random KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
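The co-occurrence idea can be sketched in a few lines. The following is a toy nearest-centroid assigner with a co-occurrence prior, not the paper's W-tree index; the weighting scheme and all names are illustrative assumptions:

```python
# Toy bag-of-visual-words assignment with a co-occurrence prior.
# Not the paper's W-tree; a minimal sketch of the general idea that
# already-assigned neighbour words can bias the next assignment.
from collections import defaultdict
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_word(feature, centroids):
    """Plain visual word generation: index of the nearest centroid."""
    return min(range(len(centroids)),
               key=lambda i: euclidean(feature, centroids[i]))

def build_cooccurrence(images, centroids):
    """Count how often pairs of visual words co-occur in one image."""
    table = defaultdict(lambda: defaultdict(int))
    for feats in images:
        words = [assign_word(f, centroids) for f in feats]
        for w in words:
            for v in words:
                if v != w:
                    table[w][v] += 1
    return table

def assign_with_prior(feature, centroids, context_words, table, alpha=0.5):
    """Trade a little distance for contextual plausibility: words that
    co-occur with the already-assigned context get a score bonus."""
    def score(i):
        prior = sum(table[c].get(i, 0) for c in context_words)
        return euclidean(feature, centroids[i]) - alpha * math.log1p(prior)
    return min(range(len(centroids)), key=score)
```

With a strong enough prior, a feature lying between two centroids can be pulled toward the word that habitually co-occurs with its neighbours, which is the intuition behind re-weighting the index-tree search path.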
Cross-modal working memory binding and word recognition skills: how specific is the link?
Wang, Shinmin; Allen, Richard J
2018-04-01
Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.
Basu, Anamitra; Mandal, Manas K
2004-07-01
The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.
Lidestam, Björn; Rönnberg, Jerker
2016-01-01
The present study compared elderly hearing aid (EHA) users (n = 20) with elderly normal-hearing (ENH) listeners (n = 20) in terms of isolation points (IPs, the shortest time required for correct identification of a speech stimulus) and accuracy of audiovisual gated speech stimuli (consonants, words, and final words in highly and less predictable sentences) presented in silence. In addition, we compared the IPs of audiovisual speech stimuli from the present study with auditory ones extracted from a previous study, to determine the impact of the addition of visual cues. Both participant groups achieved ceiling levels in terms of accuracy in the audiovisual identification of gated speech stimuli; however, the EHA group needed longer IPs for the audiovisual identification of consonants and words. The benefit of adding visual cues to auditory speech stimuli was more evident in the EHA group, as audiovisual presentation significantly shortened the IPs for consonants, words, and final words in less predictable sentences; in the ENH group, audiovisual presentation only shortened the IPs for consonants and words. In conclusion, although the audiovisual benefit was greater for the EHA group, this group had inferior performance compared with the ENH group in terms of IPs when supportive semantic context was lacking. Consequently, EHA users needed the initial part of the audiovisual speech signal to be longer than did their counterparts with normal hearing to reach the same level of accuracy in the absence of a semantic context. PMID:27317667
Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.
Marcet, Ana; Perea, Manuel
2017-08-01
For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.
Masked Priming Is Abstract in the Left and Right Visual Fields
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Turner, Emma L.
2005-01-01
Two experiments assessed masked priming for words presented to the left and right visual fields in a lexical decision task. In both Experiments, the same magnitude and pattern of priming was obtained for visually similar ("kiss"-"KISS") and dissimilar ("read"-"READ") prime-target pairs. These findings…
Slipped Lips: Onset Asynchrony Detection of Auditory-Visual Language in Autism
ERIC Educational Resources Information Center
Grossman, Ruth B.; Schneps, Matthew H.; Tager-Flusberg, Helen
2009-01-01
Background: It has frequently been suggested that individuals with autism spectrum disorder (ASD) have deficits in auditory-visual (AV) sensory integration. Studies of language integration have mostly used non-word syllables presented in congruent and incongruent AV combinations and demonstrated reduced influence of visual speech in individuals…
Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S
2016-08-01
Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. 26 literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta
2016-09-01
In masked priming lexical decision experiments, there is a matched-case identity advantage for nonwords, but not for words (e.g., ERTAR-ERTAR < ertar-ERTAR; ALTAR-ALTAR = altar-ALTAR). This dissociation has been interpreted in terms of feedback from higher levels of processing during orthographic encoding. Here, we examined whether a matched-case identity advantage also occurs for words when top-down feedback is minimized. We employed a task that taps prelexical orthographic processes: the masked prime same-different task. For "same" trials, results showed faster response times for targets when preceded by a briefly presented matched-case identity prime than when preceded by a mismatched-case identity prime. Importantly, this advantage was similar in magnitude for nonwords and words. This finding constrains the interplay of bottom-up versus top-down mechanisms in models of visual-word identification.
Representation of visual symbols in the visual word processing network.
Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S
2015-03-01
Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, but equally to both symbolic and non-symbolic equivalents. Greater response to symbolic than non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, but only for words, and in the case of the left inferior frontal gyri, also for musical notation. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporooccipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically for stimuli with symbolic content. Selectivity for stimuli with symbolic content only emerges in the visual word network at the level of the middle temporal and inferior frontal gyri, but is specific for words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Shafiro, Valeriy; Kharkhurin, Anatoliy V.
2009-01-01
Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…
Partial Membership Latent Dirichlet Allocation for Soft Image Segmentation.
Chen, Chao; Zare, Alina; Trinh, Huy N; Omotara, Gbenga O; Cobb, James Tory; Lagaunne, Timotius A
2017-12-01
Topic models [e.g., probabilistic latent semantic analysis, latent Dirichlet allocation (LDA), and supervised LDA] have been widely used for segmenting imagery. However, these models are confined to crisp segmentation, forcing a visual word (i.e., an image patch) to belong to one and only one topic. Yet, there are many images in which some regions cannot be assigned a crisp categorical label (e.g., transition regions between a foggy sky and the ground or between sand and water at a beach). In these cases, a visual word is best represented with partial memberships across multiple topics. To address this, we present a partial membership LDA (PM-LDA) model and an associated parameter estimation algorithm. This model can be useful for imagery, where a visual word may be a mixture of multiple topics. Experimental results on visual and sonar imagery show that PM-LDA can produce both crisp and soft semantic image segmentations; a capability previous topic modeling methods do not have.
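The crisp-versus-partial distinction is easy to illustrate. The sketch below is not the PM-LDA estimation algorithm, only a toy showing what a soft membership vector for a visual word looks like; the likelihood numbers in the test are invented:

```python
# Toy illustration of "partial membership" versus crisp topic
# assignment for a visual word (an image patch). Not PM-LDA itself,
# just the representational difference the abstract describes.

def crisp_topic(likelihoods):
    """What a standard topic model does: one topic per visual word."""
    return max(range(len(likelihoods)), key=lambda i: likelihoods[i])

def partial_memberships(likelihoods):
    """Soft assignment: memberships across all topics, summing to 1."""
    total = sum(likelihoods)
    return [l / total for l in likelihoods]
```

For a patch in a foggy sky/ground transition region, near-tied likelihoods for "sky" and "ground" survive as two substantial memberships, where a crisp model would keep only the winner and discard the near-tie.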
Elevating Baseline Activation Does Not Facilitate Reading of Unattended Words
NASA Technical Reports Server (NTRS)
Lien, Mei-Ching; Kouchi, Scott; Ruthruff, Eric; Lachter, Joel B.
2009-01-01
Previous studies have disagreed about the extent to which people extract meaning from words presented outside the focus of spatial attention. The present study examined a possible explanation for such discrepancies, inspired by attenuation theory: unattended words can be read more automatically when they have a high baseline level of activation (e.g., due to frequent repetition or due to being expected in a given context). We presented a brief prime word in lowercase, followed by a target word in uppercase. Participants indicated whether the target word belonged to a particular category (e.g., "sport"). When we drew attention to the prime word using a visual cue, the prime produced substantial priming effects on target responses (i.e., faster responses when the prime and target words were identical or from the same category than when they belonged to different categories). When prime words were not attended, however, they produced no priming effects. This finding replicated even when there were only 4 words, each repeated 160 times during the experiment. Even with a very high baseline level of activation, it appears that very little word processing is possible without spatial attention.
Influence of automatic word reading on motor control.
Gentilucci, M; Gangitano, M
1998-02-01
We investigated the possible influence of automatic word reading on processes of visuo-motor transformation. Six subjects were required to reach and grasp a rod on whose visible face the word 'long' or 'short' was printed. Word reading was not explicitly required. In order to induce subjects to visually analyse the object trial by trial, object position and size were randomly varied during the experimental session. The kinematics of the reaching component was affected by word presentation. Peak acceleration, peak velocity, and peak deceleration of the arm were higher for the word 'long' than for the word 'short'. That is, during the initial movement phase subjects automatically associated the meaning of the word with the distance to be covered and activated a motor program for a farther and/or nearer object position. During the final movement phase, subjects modified the braking forces (deceleration) in order to correct the initial error. No effect of the words on the grasp component was observed. These results suggest a possible influence of cognitive functions on motor control and seem to contrast with the notion that the analyses executed in the ventral and dorsal cortical visual streams are different and independent.
ERIC Educational Resources Information Center
Csikos, Csaba; Szitanyi, Judit; Kelemen, Rita
2012-01-01
The present study aims to investigate the effects of a design experiment developed for third-grade students in the field of mathematics word problems. The main focus of the program was developing students' knowledge about word problem solving strategies with an emphasis on the role of visual representations in mathematical modeling. The experiment…
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc
2011-01-01
Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex in two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: A bilateral projection of visual input in foveal vision, or interhemispheric…
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.
2009-01-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…
Stephan-Otto, Christian; Siddi, Sara; Senior, Carl; Cuevas-Esteban, Jorge; Cambra-Martí, Maria Rosa; Ochoa, Susana; Brébion, Gildas
2017-09-01
Previous research suggests that visual hallucinations in schizophrenia consist of mental images mistaken for percepts due to failure of the reality-monitoring processes. However, the neural substrates that underpin such dysfunction are currently unknown. We conducted a brain imaging study to investigate the role of visual mental imagery in visual hallucinations. Twenty-three patients with schizophrenia and 26 healthy participants were administered a reality-monitoring task whilst undergoing an fMRI protocol. At the encoding phase, a mixture of pictures of common items and labels designating common items were presented. On the memory test, participants were requested to remember whether a picture of the item had been presented or merely its label. Visual hallucination scores were associated with a liberal response bias reflecting propensity to erroneously remember pictures of the items that had in fact been presented as words. At encoding, patients with visual hallucinations differentially activated the right fusiform gyrus when processing the words they later remembered as pictures, which suggests the formation of visual mental images. On the memory test, the whole patient group activated the anterior cingulate and medial superior frontal gyrus when falsely remembering pictures. However, no differential activation was observed in patients with visual hallucinations, whereas in the healthy sample, the production of visual mental images at encoding led to greater activation of a fronto-parietal decisional network on the memory test. Visual hallucinations are associated with enhanced visual imagery and possibly with a failure of the reality-monitoring processes that enable discrimination between imagined and perceived events. Copyright © 2017 Elsevier Ltd. All rights reserved.
Visual Attention to Print-Salient and Picture-Salient Environmental Print in Young Children
ERIC Educational Resources Information Center
Neumann, Michelle M.; Summerfield, Katelyn; Neumann, David L.
2015-01-01
Environmental print is composed of words and contextual cues such as logos and pictures. The salience of the contextual cues may influence attention to words and thus the potential of environmental print in promoting early reading development. The present study explored this by presenting pre-readers (n = 20) and beginning readers (n = 16) with…
Balthasar, Andrea J R; Huber, Walter; Weis, Susanne
2011-09-02
Homonym processing in German is of theoretical interest as homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect is dependent on visual in contrast to auditory input modality. 18 healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym in relation to two spoken or written words. A semantic-lexical task (oral naming from two-word definitions) was used as a control condition. When comparing brain activation for solved homonym trials to both brain activation for unsolved homonyms and solved definition trials, we obtained two activation patterns, which characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it might be the substrate of access to word form knowledge. Copyright © 2011 Elsevier B.V. All rights reserved.
Attentional Capture of Objects Referred to by Spoken Language
ERIC Educational Resources Information Center
Salverda, Anne Pier; Altmann, Gerry T. M.
2011-01-01
Participants saw a small number of objects in a visual display and performed a visual detection or visual-discrimination task in the context of task-irrelevant spoken distractors. In each experiment, a visual cue was presented 400 ms after the onset of a spoken word. In experiments 1 and 2, the cue was an isoluminant color change and participants…
Walla, Peter; Hufnagl, Bernd; Lehrner, Johann; Mayer, Dagmar; Lindinger, Gerald; Deecke, Lüder; Lang, Wilfried
2002-11-01
The present study was meant to distinguish between unconscious and conscious olfactory information processing and to investigate the influence of olfaction on word information processing. Magnetic field changes were recorded in healthy young participants during deep encoding of visually presented words, whereby some of the words were randomly associated with an odor. All recorded data were then split into two groups. One group consisted of participants who did not consciously perceive the odor during the whole experiment, whereas the other group did report continuous conscious odor perception. The magnetic field changes related to the condition 'words without odor' were subtracted from the magnetic field changes related to the condition 'words with odor' for both groups. First, an odor-induced effect occurred between about 200 and 500 ms after stimulus onset which was similar in both groups. It is interpreted to reflect an activity reduction during word encoding related to the additional olfactory stimulation. Second, a later effect occurred between about 600 and 900 ms after stimulus onset which differed between the two groups. This effect was due to higher brain activity related to the additional olfactory stimulation. It was more pronounced in the group consisting of participants who consciously perceived the odor during the whole experiment as compared to the other group. These results are interpreted as evidence that the later effect is related to conscious odor perception whereas the earlier effect reflects unconscious olfactory information processing. Furthermore, our study provides evidence that only the conscious perception of an odor presented simultaneously with the visual presentation of a word reduces the word's chance of being subsequently recognized.
Incidental orthographic learning during a color detection task.
Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R
2017-09-01
Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Duñabeitia, Jon Andoni; Dimitropoulou, María; Estévez, Adelina; Carreiras, Manuel
2013-01-01
The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to “unlearn” this natural tolerance to mirror reversals in order to efficiently discriminate letters and words. Therefore, it is supposed that this unlearning process takes place in a gradual way and that reading expertise modulates mirror-letter discrimination. However, to date no supporting evidence for this has been obtained. We present data from an eye-movement study that investigated the degree of sensitivity to mirror-letters in a group of beginning readers and a group of expert readers. Participants had to decide which of the two strings presented on a screen corresponded to an auditorily presented word. Visual displays always included the correct target word and one distractor word. Results showed that those distractors that were the same as the target word except for the mirror lateralization of two internal letters attracted participants’ attention more than distractors created by replacement of two internal letters. Interestingly, the time course of the effects was found to be different for the two groups, with beginning readers showing a greater tolerance (decreased sensitivity) to mirror-letters than expert readers. Implications of these findings are discussed within the framework of preceding evidence showing how reading expertise modulates letter identification. PMID:24273596
The Attention Cascade Model and Attentional Blink
ERIC Educational Resources Information Center
Shih, Shui-I
2008-01-01
An attention cascade model is proposed to account for attentional blinks in rapid serial visual presentation (RSVP) of stimuli. Data were collected using single characters in a single RSVP stream at 10 Hz [Shih, S., & Reeves, A. (2007). "Attentional capture in rapid serial visual presentation." "Spatial Vision", 20(4), 301-315], and single words,…
ERIC Educational Resources Information Center
Bowers, Jeffrey S.; Davis, Colin J.; Hanley, Derek A.
2005-01-01
We assessed the impact of visual similarity on written word identification by having participants learn new words (e.g. BANARA) that were neighbours of familiar words that previously had no neighbours (e.g. BANANA). Repeated exposure to these new words made it more difficult to semantically categorize the familiar words. There was some evidence of…
Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M
2016-11-01
Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a pathway analysis to examine the direct and indirect path between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on more difficult reading comprehension but not on an easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.
Similarity as an organising principle in short-term memory.
LeCompte, D C; Watkins, M J
1993-03-01
The role of stimulus similarity as an organising principle in short-term memory was explored in a series of seven experiments. Each experiment involved the presentation of a short sequence of items that were drawn from two distinct physical classes and arranged such that item class changed after every second item. Following presentation, one item was re-presented as a probe for the 'target' item that had directly followed it in the sequence. Memory for the sequence was considered organised by class if probability of recall was higher when the probe and target were from the same class than when they were from different classes. Such organisation was found when one class was auditory and the other was visual (spoken vs. written words, and sounds vs. pictures). It was also found when both classes were auditory (words spoken in a male voice vs. words spoken in a female voice) and when both classes were visual (digits shown in one location vs. digits shown in another). It is concluded that short-term memory can be organised on the basis of sensory modality and on the basis of certain features within both the auditory and visual modalities.
Implicit phonological priming during visual word recognition.
Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C
2011-03-15
Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.
Ma, Bosen; Wang, Xiaoyun; Li, Degao
2015-01-01
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word that is composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), relation, prime (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two Chinese characters has its own contribution in helping retrieve the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activations of the word's semantic representations.
Universal brain systems for recognizing word shapes and handwriting gestures during reading
Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas
2012-01-01
Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998
A task-dependent causal role for low-level visual processes in spoken word comprehension.
Ostarek, Markus; Huettig, Falk
2017-08-01
It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Image Location Estimation by Salient Region Matching.
Qian, Xueming; Zhao, Yisi; Han, Junwei
2015-11-01
Locations of images are now widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of the large geo-tagged image set by content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (i.e., content-based image retrieval performance). We propose to generate visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is utilized to code the relative positions of visual words. We propose to generate a position descriptor for each visual word and to build a fast indexing structure for the visual word groups. Experiments show the effectiveness of the proposed approach.
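The grouping step this abstract describes can be sketched with a minimal flat-kernel mean shift over keypoint coordinates. This is an illustrative toy, not the paper's implementation: the positions, the bandwidth, and the centroid-offset "position descriptor" below are all assumptions for demonstration.

```python
import numpy as np

def mean_shift(points, bandwidth, iters=50):
    """Minimal flat-kernel mean shift: each mode repeatedly moves to the
    mean of the original points lying within `bandwidth` of it."""
    modes = points.astype(float)
    for _ in range(iters):
        for i in range(len(modes)):
            near = points[np.linalg.norm(points - modes[i], axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    labels, centers = [], []
    for m in modes:  # merge modes that converged to the same location
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < bandwidth / 2:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return np.array(labels), np.array(centers)

# Hypothetical (x, y) keypoint positions of matched visual words.
positions = np.array([[10.0, 12.0], [11.5, 13.0], [12.0, 11.0],  # one group
                      [80.0, 75.0], [82.0, 77.0]])               # another group
labels, centers = mean_shift(positions, bandwidth=15.0)

# Crude relative-position descriptor: offset of each visual word from its
# group centroid (a stand-in for the paper's spatial-constraint coding).
descriptors = positions - centers[labels]
```

With this bandwidth the two spatial groups are recovered, and each descriptor encodes where a word sits relative to its group.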
Recapitulation of Emotional Source Context during Memory Retrieval
Bowen, Holly J.; Kensinger, Elizabeth A.
2016-01-01
Recapitulation involves the reactivation of cognitive and neural encoding processes at retrieval. In the current study, we investigated the effects of emotional valence on recapitulation processes. Participants encoded neutral words presented on a background face or scene that was negative, positive or neutral. During retrieval, studied and novel neutral words were presented alone (i.e., without the scene or face) and participants were asked to make a remember, know or new judgment. Both the encoding and retrieval tasks were completed in the fMRI scanner. Conjunction analyses were used to reveal the overlap between encoding and retrieval processing. These results revealed that, compared to positive or neutral contexts, words that were recollected and previously encoded in a negative context showed greater encoding-to-retrieval overlap, including in the ventral visual stream and amygdala. Interestingly, the visual stream recapitulation was not enhanced within regions that specifically process faces or scenes but rather extended broadly throughout visual cortices. These findings elucidate how memories for negative events can feel more vivid or detailed than positive or neutral memories. PMID:27923474
Taking Word Clouds Apart: An Empirical Investigation of the Design Space for Keyword Summaries.
Felix, Cristian; Franconeri, Steven; Bertini, Enrico
2018-01-01
In this paper we present a set of four user studies aimed at exploring the visual design space of what we call keyword summaries: lists of words with associated quantitative values used to help people derive an intuition of what information a given document collection (or part of it) may contain. We seek to systematically study how different visual representations may affect people's performance in extracting information out of keyword summaries. To this purpose, we first create a design space of possible visual representations and compare the solutions in this design space through a variety of representative tasks and performance metrics. Other researchers have, in the past, studied some aspects of the effectiveness of word clouds; however, the existing literature is somewhat scattered and does not seem to address the problem in a sufficiently systematic and holistic manner. The results of our studies showed a strong dependency on the tasks users are performing. In this paper we present details of our methodology and results, as well as guidelines on how to design effective keyword summaries based on our findings.
Deep learning of orthographic representations in baboons.
Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
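The letter-combination sensitivity the networks converged on can be illustrated with a much smaller stand-in: a logistic readout over open-bigram features that learns to separate a toy word list from consonant-string nonwords. The word lists, the open-bigram coding, and the plain gradient-descent training below are illustrative assumptions, not the authors' deep convolutional model.

```python
import numpy as np

words    = ["THIS", "THAT", "WITH", "THEM", "THEN", "HAND"]   # toy "words"
nonwords = ["XQZJ", "KVXQ", "ZZPV", "QQWX", "JXKV", "VQZK"]   # toy nonwords

def bigrams(s):
    # Open bigrams: every ordered letter pair, keeping string order.
    return [s[i] + s[j] for i in range(len(s)) for j in range(i + 1, len(s))]

vocab = sorted({b for s in words + nonwords for b in bigrams(s)})
idx = {b: k for k, b in enumerate(vocab)}

def features(s):
    v = np.zeros(len(vocab))
    for b in bigrams(s):
        if b in idx:          # bigrams never seen in training are ignored
            v[idx[b]] = 1.0
    return v

X = np.array([features(s) for s in words + nonwords])
y = np.array([1.0] * len(words) + [0.0] * len(nonwords))

# Logistic readout trained by gradient descent, a crude proxy for the
# word/nonword reinforcement signal the baboons and networks received.
w, b = np.zeros(len(vocab)), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * (p - y).mean()

def word_score(s):
    return float(features(s) @ w + b)
```

Because the readout weights attach to letter combinations rather than whole strings, the toy model also generalizes: a novel word-like string such as "THIN" scores higher than a novel nonword-like string such as "XQKV".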
NASA Astrophysics Data System (ADS)
Benkler, Erik; Telle, Harald R.
2007-06-01
An improved phase-locked loop (PLL) for versatile synchronization of a sampling pulse train to an optical data stream is presented. It enables optical sampling of the true waveform of repetitive high bit-rate optical time division multiplexed (OTDM) data words such as pseudorandom bit sequences. Visualization of the true waveform can reveal details, which cause systematic bit errors. Such errors cannot be inferred from eye diagrams and require word-synchronous sampling. The programmable direct-digital-synthesis circuit used in our novel PLL approach allows flexible adaptation to virtually any problem-specific synchronization scenario, including those required for waveform sampling, for jitter measurements by slope detection, and for classical eye diagrams. Phase comparison of the PLL is performed at 10-GHz OTDM base clock rate, leading to a residual synchronization jitter of less than 70 fs.
Matching Heard and Seen Speech: An ERP Study of Audiovisual Word Recognition
Kaganovich, Natalya; Schumaker, Jennifer; Rowland, Courtney
2016-01-01
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined a relationship between two distinct stages of visual articulatory processing and the SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented visual silent articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of a mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals’ SIN accuracy improvement in the presence of the talker’s face. PMID:27155219
Effects of Referent Token Variability on L2 Vocabulary Learning
ERIC Educational Resources Information Center
Sommers, Mitchell S.; Barcroft, Joe
2013-01-01
Previous research has demonstrated substantially improved second language (L2) vocabulary learning when spoken word forms are varied using multiple talkers, speaking styles, or speaking rates. In contrast, the present study varied visual representations of referents for target vocabulary. English speakers learned Spanish words in formats of no…
Cognitive Skills and Literacy Performance of Chinese Adolescents with and without Dyslexia
ERIC Educational Resources Information Center
Chung, Kevin K. H.; Ho, Connie S.-H.; Chan, David W.; Tsang, Suk-Man; Lee, Suk-Han
2011-01-01
The present study sought to identify cognitive abilities that might distinguish Hong Kong Chinese adolescents with dyslexia and to assess how these abilities were associated with Chinese word reading, word dictation, and reading comprehension. The cognitive skills of interest were morphological awareness, visual-orthographic knowledge, rapid…
ERIC Educational Resources Information Center
Filatova, Olga
2016-01-01
Word cloud generating applications were originally designed to add visual attractiveness to posters, websites, slide show presentations, and the like. They can also be an effective tool in reading and writing classes in English as a second language (ESL) for all levels of English proficiency. They can reduce reading time and help to improve…
Syntactic Categorization in French-Learning Infants
ERIC Educational Resources Information Center
Shi, Rushen; Melancon, Andreane
2010-01-01
Recent work showed that infants recognize and store function words starting from the age of 6-8 months. Using a visual fixation procedure, the present study tested whether French-learning 14-month-olds have the knowledge of syntactic categories of determiners and pronouns, respectively, and whether they can use these function words for…
How Word Frequency Affects Morphological Processing in Monolinguals and Bilinguals
ERIC Educational Resources Information Center
Lehtonen, Minna; Laine, Matti
2003-01-01
The present study investigated processing of morphologically complex words in three different frequency ranges in monolingual Finnish speakers and Finnish-Swedish bilinguals. By employing a visual lexical decision task, we found a differential pattern of results in monolinguals vs. bilinguals. Monolingual Finns seemed to process low frequency and…
Neuromagnetic correlates of audiovisual word processing in the developing brain.
Dinga, Samantha; Wu, Di; Huang, Shuyang; Wu, Caiyun; Wang, Xiaoshan; Shi, Jingping; Hu, Yue; Liang, Chun; Zhang, Fawen; Lu, Meng; Leiken, Kimberly; Xiang, Jing
2018-06-01
The brain undergoes enormous changes during childhood. Little is known about how the brain develops to serve word processing. The objective of the present study was to investigate the maturational changes of word processing in children and adolescents using magnetoencephalography (MEG). Responses to a word processing task were investigated in sixty healthy participants. Each participant was presented with simultaneous visual and auditory word pairs in "match" and "mismatch" conditions. The patterns of neuromagnetic activation from MEG recordings were analyzed at both sensor and source levels. Topography and source imaging revealed that word processing transitioned from bilateral connections to unilateral connections as age increased from 6 to 17 years old. Correlation analyses of language networks revealed that the path length of word processing networks negatively correlated with age (r = -0.833, p < 0.0001), while the connection strength (r = 0.541, p < 0.01) and the clustering coefficient (r = 0.705, p < 0.001) of word processing networks were positively correlated with age. In addition, males had more visual connections, whereas females had more auditory connections. The correlations between gender and path length, gender and connection strength, and gender and clustering coefficient demonstrated a developmental trend without reaching statistical significance. The results indicate that the developmental trajectory of word processing is gender specific. Since the neuromagnetic signatures of these gender-specific paths to adult word processing were determined using non-invasive, objective, and quantitative methods, the results may play a key role in understanding language impairments in pediatric patients in the future. Copyright © 2018 Elsevier B.V. All rights reserved.
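The two network measures that the correlations above are built on, characteristic path length and clustering coefficient, can be computed directly from an adjacency matrix. The four-node graph below is a hypothetical toy network, not MEG data; the functions are standard textbook definitions.

```python
from collections import deque
from itertools import combinations

def avg_path_length(adj):
    """Mean shortest-path length over all ordered node pairs
    (BFS; assumes a connected, unweighted, undirected graph)."""
    n, dists = len(adj), []
    for s in range(n):
        d, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and v not in d:
                    d[v] = d[u] + 1
                    q.append(v)
        dists += [d[v] for v in d if v != s]
    return sum(dists) / len(dists)

def avg_clustering(adj):
    """Mean local clustering coefficient: the fraction of each node's
    neighbour pairs that are themselves connected."""
    n, cs = len(adj), []
    for u in range(n):
        nbrs = [v for v in range(n) if adj[u][v]]
        k = len(nbrs)
        links = sum(adj[a][b] for a, b in combinations(nbrs, 2))
        cs.append(2 * links / (k * (k - 1)) if k > 1 else 0.0)
    return sum(cs) / n

# Hypothetical 4-node network: a triangle (0-1-2) with a tail (2-3).
adj = [[0, 1, 1, 0],
       [1, 0, 1, 0],
       [1, 1, 0, 1],
       [0, 0, 1, 0]]
```

On this graph the mean path length is 4/3 and the mean clustering coefficient is 7/12; in the study, shorter paths and higher clustering tracked increasing age.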
Orthographic versus semantic matching in visual search for words within lists.
Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas
2012-03-01
An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.
Category and Word Search: Generalizing Search Principles to Complex Processing.
1982-03-01
complex processing (e.g., LaBerge & Samuels, 1974; Shiffrin & Schneider, 1977). In the present paper we examine how well the major phenomena in simple visual...subjects are searching for novel characters (LaBerge, 1973). The relatively large and rapid CH practice effects for word and category search are analogous...1974) demonstrated interference effects of irrelevant flanking letters. Shaffer and LaBerge (1979) showed a similar effect with words and semantic
Decoding and disrupting left midfusiform gyrus activity during word reading
Hirshorn, Elizabeth A.; Ward, Michael J.; Fiez, Julie A.; Ghuman, Avniel Singh
2016-01-01
The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
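The "orthographic similarity space" this record refers to can be made concrete with a standard edit-distance metric: word forms that differ by fewer letter operations sit closer together. The word set and the choice of Levenshtein distance below are illustrative assumptions; the study's machine-learning analyses of the electrophysiological data were of course far richer.

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance
    (insertions, deletions, substitutions, each costing 1)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # delete from a
                           cur[j - 1] + 1,             # insert into a
                           prev[j - 1] + (ca != cb)))  # substitute
        prev = cur
    return prev[-1]

# Hypothetical word set; the pairwise distance matrix is one simple
# candidate for an "orthographic similarity space".
words = ["word", "work", "form", "fork", "grasp"]
dist = [[levenshtein(a, b) for b in words] for a in words]
```

In such a space, "word" and "work" (distance 1) are near neighbours while "word" and "grasp" are far apart, which is the kind of structure the early lmFG activity was found to reflect.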
Deafness for the meanings of number words
Caño, Agnès; Rapp, Brenda; Costa, Albert; Juncadella, Montserrat
2008-01-01
We describe the performance of an aphasic individual who showed a selective impairment affecting his comprehension of auditorily presented number words and not other word categories. His difficulty in number word comprehension was restricted to the auditory modality, given that with visual stimuli (written words, Arabic numerals and pictures) his comprehension of number and non-number words was intact. While there have been previous reports of selective difficulty or sparing of number words at the semantic and post-semantic levels, this is the first reported case of a pre-semantic deficit that is specific to the category of number words. This constitutes evidence that lexical semantic distinctions are respected by modality-specific neural mechanisms responsible for providing access to the meanings of words. PMID:17915265
The development of cortical sensitivity to visual word forms.
Ben-Shachar, Michal; Dougherty, Robert F; Deutsch, Gayle K; Wandell, Brian A
2011-09-01
The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous group of children, initially 7-12 years old. The results show age-related increase in children's cortical sensitivity to word visibility in posterior left occipito-temporal sulcus (LOTS), nearby the anatomical location of the visual word form area. Moreover, the rate of increase in LOTS word sensitivity specifically correlates with the rate of improvement in sight word efficiency, a measure of speeded overt word reading. Other cortical regions, including V1, posterior parietal cortex, and the right homologue of LOTS, did not demonstrate such developmental changes. These results provide developmental support for the hypothesis that LOTS is part of the cortical circuitry that extracts visual word forms quickly and efficiently and highlight the importance of developing cortical sensitivity to word visibility in reading acquisition.
Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.
2014-01-01
In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566
An Avatar-Based Italian Sign Language Visualization System
NASA Astrophysics Data System (ADS)
Falletto, Andrea; Prinetto, Paolo; Tiotto, Gabriele
In this paper, we present an experimental system that supports translation from Italian into Italian Sign Language (ISL), the sign language used by the Italian deaf community, and its visualization through a virtual character. Our objective is to develop a complete platform useful for any application and reusable on several platforms, including the Web, digital television, and offline text translation. The system relies on a database that stores both a corpus of Italian words and words coded in the ISL notation system. An interface for data insertion is implemented, which allows future extensions and integrations.
Pitch enhancement facilitates word learning across visual contexts
Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh
2014-01-01
This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144
Verbal-spatial and visuospatial coding of power-space interactions.
Dai, Qiang; Zhu, Lei
2018-05-10
A power-space interaction, the phenomenon that people respond faster to powerful words when they are placed higher in the visual field and faster to powerless words when they are placed lower in the visual field, has been found repeatedly. The dominant explanation of this power-space interaction is that it results from a tight correspondence between the representation of power and visual space (i.e., a visuospatial coding account). In the present study, we demonstrated that the interaction between power and space can also be based on verbal-spatial coding in the absence of any vertical spatial information. Additionally, verbal-spatial coding was dominant in driving the power-space interaction when verbal space was contrasted with visual space. Copyright © 2018 Elsevier Inc. All rights reserved.
Dissociating visual form from lexical frequency using Japanese.
Twomey, Tae; Kawabata Duncan, Keith J; Hogan, John S; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T
2013-05-01
In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form - an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally, participants responded more quickly to high than low frequency words and to visually familiar relative to less familiar words, independent of script. Critically, the imaging results showed that visual familiarity, as opposed to lexical frequency, had a strong effect on activation in ventral occipito-temporal cortex. Activation here was also greater for Kanji than Hiragana words and this was not due to their inherent differences in visual complexity. These findings can be understood within a predictive coding framework in which vOT receives bottom-up information encoding complex visual forms and top-down predictions from regions encoding non-visual attributes of the stimulus. Copyright © 2012 Elsevier Inc. All rights reserved.
Kornrumpf, Benthe; Sommer, Werner
2015-09-01
Due to capacity limitations, visual attention must be focused on a limited region of the visual field. Nevertheless, it is assumed that the size of that region may vary with task demands. We aimed to obtain direct evidence for the modulation of visuospatial attention as a function of foveal and parafoveal task load. Participants were required to fixate the center word of word triplets. In separate task blocks, they semantically classified either only the fixated word or both the fixated word and the parafoveal word to its right. The spatiotemporal distribution of attention was assessed with task-irrelevant probes flashed briefly at center or parafoveal positions, during or in between word presentation trials. The N1 component of the ERP elicited by intertrial probes at possible target positions increased with task demands within a block. These results suggest the recruitment of additional attentional resources rather than a redistribution of a fixed resource pool, which persists across trials. © 2015 Society for Psychophysiological Research.
Influence of color word availability on the Stroop color-naming effect.
Kim, Hyosun; Cho, Yang Seok; Yamaguchi, Motonori; Proctor, Robert W
2008-11-01
Three experiments tested whether the Stroop color-naming effect is a consequence of word recognition's being automatic or of the color word's capturing visual attention. In Experiment 1, a color bar was presented at fixation as the color carrier, with color and neutral words presented in locations above or below the color bar; Experiment 2 was similar, except that the color carrier could occur in one of the peripheral locations and the color word at fixation. The Stroop effect increased as display duration increased, and the Stroop dilution effect (a reduced Stroop effect when a neutral word is also present) was an approximately constant proportion of the Stroop effect at all display durations, regardless of whether the color bar or color word was at fixation. In Experiment 3, the interval between the onsets of the to-be-named color and the color word was manipulated. The Stroop effect decreased with increasing delay of the color word onset, but the absolute amount of Stroop dilution produced by the neutral word increased. This study's results imply that an attention shift from the color carrier to the color word is an important factor modulating the size of the Stroop effect.
Trait anxiety and impaired control of reflective attention in working memory.
Hoshino, Takatoshi; Tanno, Yoshihiko
2016-01-01
The present study investigated whether the control of reflective attention in working memory (WM) is impaired in individuals with high trait anxiety. We focused on the consequences of refreshing, a simple reflective process of thinking briefly about a just-activated representation in mind, on the subsequent processing of verbal stimuli. Participants performed a selective refreshing task, in which they initially refreshed or read one word from a three-word set, and then refreshed a non-selected item from the initial phase or read aloud a new word. Individuals with high trait anxiety exhibited greater latencies when refreshing a word after previously refreshing a word from the same list of semantic associates. The same pattern was observed for reading a new word after prior refreshing. These findings suggest that individuals with high trait anxiety have difficulty resolving interference from active distractors when directing reflective attention towards contents in WM or processing a visually presented word.
Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.
2012-01-01
College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2011-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimulus characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading off in their utilization of lexical and nonlexical information. PMID:21728459
Reading sky and seeing a cloud: On the relevance of events for perceptual simulation.
Ostarek, Markus; Vigliocco, Gabriella
2017-04-01
Previous research has shown that processing words with an up/down association (e.g., bird, foot) can influence the subsequent identification of visual targets in a congruent location (at the top/bottom of the screen). However, as facilitation and interference were found under similar conditions, the nature of the underlying mechanisms remained unclear. We propose that word comprehension relies on the perceptual simulation of a prototypical event involving the entity denoted by a word in order to provide a general account of the different findings. In 3 experiments, participants had to discriminate between 2 target pictures appearing at the top or the bottom of the screen by pressing the left versus right button. Immediately before the targets appeared, they saw an up/down word belonging to the target's event, an up/down word unrelated to the target, or a spatially neutral control word. Prime words belonging to the target's event facilitated identification of targets at a stimulus onset asynchrony (SOA) of 250 ms (Experiment 1), but only when presented in the vertical location where they are typically seen, indicating that targets were integrated in the simulations activated by the prime words. Moreover, at the same SOA, there was a robust facilitation effect for targets appearing in their typical location regardless of the prime type. However, when words were presented for 100 ms (Experiment 2) or 800 ms (Experiment 3), only a location-nonspecific priming effect was found, suggesting that the visual system was not activated. Implications for theories of semantic processing are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Brain activation for lexical decision and reading aloud: two sides of the same coin?
Carreiras, Manuel; Mechelli, Andrea; Estévez, Adelina; Price, Cathy J
2007-03-01
This functional magnetic resonance imaging study compared the neuronal implementation of word and pseudoword processing during two commonly used word recognition tasks: lexical decision and reading aloud. In the lexical decision task, participants made a finger-press response to indicate whether a visually presented letter string is a word or a pseudoword (e.g., "paple"). In the reading-aloud task, participants read aloud visually presented words and pseudowords. The same sets of words and pseudowords were used for both tasks. This enabled us to look for the effects of task (lexical decision vs. reading aloud), lexicality (words vs. nonwords), and the interaction of lexicality with task. We found very similar patterns of activation for lexical decision and reading aloud in areas associated with word recognition and lexical retrieval (e.g., left fusiform gyrus, posterior temporal cortex, pars opercularis, and bilateral insulae), but task differences were observed bilaterally in sensorimotor areas. Lexical decision increased activation in areas associated with decision making and finger tapping (bilateral postcentral gyri, supplementary motor area, and right cerebellum), whereas reading aloud increased activation in areas associated with articulation and hearing the sound of the spoken response (bilateral precentral gyri, superior temporal gyri, and posterior cerebellum). The effect of lexicality (pseudoword vs. words) was also remarkably consistent across tasks. Nevertheless, increased activation for pseudowords relative to words was greater in the left precentral cortex for reading than lexical decision, and greater in the right inferior frontal cortex for lexical decision than reading. We attribute these effects to differences in the demands on speech production and decision-making processes, respectively.
Reduced effects of pictorial distinctiveness on false memory following dynamic visual noise.
Parker, Andrew; Kember, Timothy; Dagnall, Neil
2017-07-01
High levels of false recognition for non-presented items typically occur following exposure to lists of associated words. These false recognition effects can be reduced by making the studied items more distinctive through the presentation of pictures during encoding. One explanation of this is that during recognition, participants expect or attempt to retrieve distinctive pictorial information in order to evaluate the study status of the test item. If this involves the retrieval and use of visual imagery, then interfering with imagery processing should reduce the effectiveness of pictorial information in false memory reduction. In the current experiment, visual-imagery processing was disrupted at retrieval by the use of dynamic visual noise (DVN). It was found that the effects of DVN dissociated true from false memory. Memory for studied words was not influenced by the presence of an interfering noise field. However, false memory was increased and the effect of picture-induced distinctiveness was eliminated. DVN also increased false recollection and remember responses to unstudied items.
ERIC Educational Resources Information Center
Tam, Cynthia; Wells, David
2009-01-01
Visual-cognitive loads influence the effectiveness of word prediction technology. Adjusting parameters of word prediction programs can lessen visual-cognitive loads. This study evaluated the benefits of WordQ word prediction software for users' performance when the prediction window was moved to a personal digital assistant (PDA) device placed at…
ERIC Educational Resources Information Center
Gyllstad, Henrik; Wolter, Brent
2016-01-01
The present study investigates whether two types of word combinations (free combinations and collocations) differ in terms of processing by testing Howarth's Continuum Model based on word combination typologies from a phraseological tradition. A visual semantic judgment task was administered to advanced Swedish learners of English (n = 27) and…
The Efficacy of Using Diagrams When Solving Probability Word Problems in College
ERIC Educational Resources Information Center
Beitzel, Brian D.; Staley, Richard K.
2015-01-01
Previous experiments have shown a deleterious effect of visual representations on college students' ability to solve total- and joint-probability word problems. The present experiments used conditional-probability problems, known to be more difficult than total- and joint-probability problems. The diagram group was instructed in how to use tree…
Automatization and Orthographic Development in Second Language Visual Word Recognition
ERIC Educational Resources Information Center
Kida, Shusaku
2016-01-01
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
Word-Category Violations in Patients with Broca's Aphasia: An ERP Study
ERIC Educational Resources Information Center
Wassenaar, Marlies; Hagoort, Peter
2005-01-01
An event-related brain potential experiment was carried out to investigate on-line syntactic processing in patients with Broca's aphasia. Subjects were visually presented with sentences that were either syntactically correct or contained violations of word-category. Three groups of subjects were tested: Broca patients (N=11), non-aphasic patients…
The influence of print exposure on the body-object interaction effect in visual word recognition.
Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M
2012-01-01
We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.
van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark
2010-07-01
This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
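The hard versus soft assignment contrast at the heart of this codebook model can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the function name, the Gaussian-kernel weighting, and the `sigma` bandwidth parameter are assumptions chosen for clarity (the paper investigates several soft-assignment variants).

```python
import numpy as np

def bovw_histogram(features, codebook, sigma=None):
    """Bag-of-visual-words histogram for one image (illustrative sketch).

    features: (n, d) array of local image descriptors (e.g., SIFT).
    codebook: (k, d) array of visual-word centroids from a learned vocabulary.
    sigma:    None for traditional hard assignment; a positive bandwidth
              for Gaussian-kernel soft assignment (hypothetical parameter).
    """
    # Pairwise Euclidean distances between descriptors and visual words.
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    k = codebook.shape[0]
    if sigma is None:
        # Hard assignment: each descriptor votes only for its nearest word.
        hist = np.bincount(dists.argmin(axis=1), minlength=k).astype(float)
    else:
        # Soft assignment: each descriptor distributes a unit vote over all
        # words, weighted by a Gaussian kernel on the distance, which models
        # the ambiguity of mapping continuous features to discrete words.
        w = np.exp(-dists**2 / (2 * sigma**2))
        w /= w.sum(axis=1, keepdims=True)
        hist = w.sum(axis=0)
    return hist / hist.sum()  # normalized visual-word frequency distribution
```

With a large vocabulary, hard assignment leaves most descriptors voting for sparsely populated words, which is one way to understand the deteriorating performance the paper reports; the kernel-weighted variant spreads each vote over neighboring words and degrades more gracefully.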
Audiovisual speech facilitates voice learning.
Sheffert, Sonya M; Olson, Elizabeth
2004-02-01
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.
Hartwigsen, Gesa; Price, Cathy J; Baumgaertner, Annette; Geiss, Gesine; Koehnke, Maria; Ulmer, Stephan; Siebner, Hartwig R
2010-08-01
There is consensus that the left hemisphere plays a dominant role in language processing, but functional imaging studies have shown that the right as well as the left posterior inferior frontal gyri (pIFG) are activated when healthy right-handed individuals make phonological word decisions. Here we used online transcranial magnetic stimulation (TMS) to examine the functional relevance of the right pIFG for auditory and visual phonological decisions. Healthy right-handed individuals made phonological or semantic word judgements on the same set of auditorily and visually presented words while they received stereotactically guided TMS over the left, right or bilateral pIFG (n=14) or the anterior left, right or bilateral IFG (n=14). TMS started 100ms after word onset and consisted of four stimuli given at a rate of 10Hz and intensity of 90% of active motor threshold. Compared to TMS of aIFG, TMS of pIFG impaired reaction times and accuracy of phonological but not semantic decisions for visually and auditorily presented words. TMS over left, right or bilateral pIFG disrupted phonological processing to a similar degree. In a follow-up experiment, the intensity threshold for delaying phonological judgements was identical for unilateral TMS of left and right pIFG. These findings indicate that an intact function of right pIFG is necessary for accurate and efficient phonological decisions in the healthy brain with no evidence that the left and right pIFG can compensate for one another during online TMS. Our findings motivate detailed studies of phonological processing in patients with acute and chronic damage of the right pIFG. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Barban, Francesco; Zannino, Gian Daniele; Macaluso, Emiliano; Caltagirone, Carlo; Carlesimo, Giovanni A
2013-06-01
Iconic memory is a high-capacity low-duration visual memory store that allows the persistence of a visual stimulus after its offset. The categorical nature of this store has been extensively debated. This study provides functional magnetic resonance imaging evidence for brain regions underlying the persistence of postcategorical representations of visual stimuli. In a partial report paradigm, subjects matched a cued row of a 3 × 3 array of letters (postcategorical stimuli) or false fonts (precategorical stimuli) with a subsequent triplet of stimuli. The cued row was indicated by two visual flankers presented at the onset (physical stimulus readout) or after the offset of the array (iconic memory readout). The left planum temporale showed a greater modulation of the source of readout (iconic memory vs. physical stimulus) when letters were presented compared to false fonts. This is a multimodal brain region responsible for matching incoming acoustic and visual patterns with acoustic pattern templates. These findings suggest that letters persist after their physical offset in an abstract postcategorical representation. A targeted region of interest analysis revealed a similar pattern of activation in the Visual Word Form Area. These results suggest that multiple higher-order visual areas mediate iconic memory for postcategorical stimuli. Copyright © 2012 Wiley Periodicals, Inc.
Survival Processing Enhances Visual Search Efficiency.
Cho, Kit W
2018-05-01
Words rated for their survival relevance are remembered better than when rated using other well-known memory mnemonics. This finding, which is known as the survival advantage effect and has been replicated in many studies, suggests that our memory systems are molded by natural selection pressures. In two experiments, the present study used a visual search task to examine whether there is likewise a survival advantage for our visual systems. Participants rated words for their survival relevance or for their pleasantness before locating that object's picture in a search array with 8 or 16 objects. Although there was no difference in search times between the two rating scenarios when set size was 8, survival processing reduced visual search times when set size was 16. These findings reflect a search efficiency effect and suggest that, similar to our memory systems, our visual systems are also tuned toward self-preservation.
Methods study for the relocation of visual information in central scotoma cases
NASA Astrophysics Data System (ADS)
Scherlen, Anne-Catherine; Gautier, Vincent
2005-03-01
In this study, we test the benefit to reading performance of different ways of relocating the visual information hidden under the scotoma. Relocation (or unmasking) compensates for the loss of information and prevents the patient from developing gaze strategies poorly suited to reading. Eight healthy subjects were tested on a reading task while a central scotoma of various sizes was simulated for each of them. We then evaluated reading speed (words/min) under three relocation methods, with all masked information relocated (1) on both sides of the scotoma, (2) to the right of the scotoma, or (3) with only the letters essential for word recognition relocated to the right of the scotoma. We compared these reading speeds with the pathological condition, i.e., without relocating visual information. Our results show that the unmasking strategy improves reading speed when all the visual information is unmasked to the right of the scotoma, but only for large scotomas. Taking word morphology into account, perceiving only certain letters outside the scotoma can be sufficient to improve reading speed. A deeper understanding of reading processes in the presence of a scotoma will open new perspectives for visual information unmasking. The multidisciplinary competences of engineers, ophthalmologists, linguists, and clinicians would make it possible to optimize the reading benefit brought by unmasking.
When canary primes yellow: effects of semantic memory on overt attention.
Léger, Laure; Chauvet, Elodie
2015-02-01
This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.
Visual feature-tolerance in the reading network.
Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A
2011-09-08
A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas. Copyright © 2011 Elsevier Inc. All rights reserved.
Intrinsically organized network for word processing during the resting state.
Zhao, Jizheng; Liu, Jiangang; Li, Jun; Liang, Jimin; Feng, Lu; Ai, Lin; Lee, Kang; Tian, Jie
2011-01-03
Neural mechanisms underlying word processing have been extensively studied. It has been revealed that when individuals are engaged in active word processing, a complex network of cortical regions is activated. However, it is entirely unknown whether the word-processing regions are intrinsically organized in the absence of any explicit processing task, during the resting state. The present study investigated the intrinsic functional connectivity between word-processing regions during the resting state using fMRI. Correlated low-frequency fluctuations were observed between the left middle fusiform gyrus and a number of cortical regions, including the left angular gyrus, left supramarginal gyrus, bilateral pars opercularis, and left pars triangularis of the inferior frontal gyrus, which have been implicated in phonological and semantic processing. Correlations were also observed with the bilateral superior parietal lobule and dorsolateral prefrontal cortex, which have been suggested to provide top-down monitoring of the visual-spatial processing of words. The findings of our study indicate an intrinsically organized network during the resting state that likely prepares the visual system to anticipate highly probable word input for ready and effective processing. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
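Resting-state functional connectivity of the kind reported above is commonly computed as the Pearson correlation between slow BOLD time courses of paired regions. A minimal sketch with synthetic signals follows; the ROI names and time series are illustrative assumptions, not the study's data:

```python
import numpy as np

def roi_connectivity(timeseries):
    """Pairwise Pearson correlation between ROI BOLD time courses.

    timeseries: dict mapping ROI name -> 1-D array of BOLD samples.
    Returns a dict mapping (roi_a, roi_b) -> correlation coefficient.
    """
    rois = sorted(timeseries)
    conn = {}
    for i, a in enumerate(rois):
        for b in rois[i + 1:]:
            conn[(a, b)] = np.corrcoef(timeseries[a], timeseries[b])[0, 1]
    return conn

# Toy example: two ROIs sharing a slow (~0.01 Hz) fluctuation plus noise,
# and one unrelated control region.
rng = np.random.default_rng(0)
t = np.arange(200)
slow = np.sin(2 * np.pi * 0.01 * t)
ts = {
    "fusiform": slow + 0.3 * rng.standard_normal(t.size),
    "angular":  slow + 0.3 * rng.standard_normal(t.size),
    "control":  rng.standard_normal(t.size),
}
conn = roi_connectivity(ts)
```

Regions sharing the slow fluctuation correlate strongly; the control region does not, which is the signature of an intrinsically organized network.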
An Updated Account of the WISELAV Project: A Visual Construction of the English Verb System
ERIC Educational Resources Information Center
Pablos, Andrés Palacios
2016-01-01
This article presents the state of the art in WISELAV, an ongoing research project based on the metaphor Languages Are (like) Visuals (LAV) and its mapping Words-In-Shapes Exchange (WISE). First, the cognitive premises that motivate the proposal are recalled: the power of images, students' increasingly visual cognitive learning style, and the…
Locating the cortical bottleneck for slow reading in peripheral vision
Yu, Deyue; Jiang, Yi; Legge, Gordon E.; He, Sheng
2015-01-01
Yu, Legge, Park, Gage, and Chung (2010) suggested that the neural bottleneck for slow peripheral reading is located in nonretinotopic areas. We investigated the potential rate-limiting neural site for peripheral reading using fMRI, and contrasted peripheral reading with recognition of peripherally presented line drawings of common objects. We measured the BOLD responses to both text (three-letter words/nonwords) and line-drawing objects presented either in foveal or peripheral vision (10° lower right visual field) at three presentation rates (2, 4, and 8/second). The statistically significant interaction effect of visual field × presentation rate on the BOLD response for text but not for line drawings provides evidence for distinctive processing of peripheral text. This pattern of results was obtained in all five regions of interest (ROIs). At the early retinotopic cortical areas, the BOLD signal slightly increased with increasing presentation rate for foveal text, and remained fairly constant for peripheral text. In the Occipital Word-Responsive Area (OWRA), Visual Word Form Area (VWFA), and object sensitive areas (LO and PHA), the BOLD responses to text decreased with increasing presentation rate for peripheral but not foveal presentation. In contrast, there was no rate-dependent reduction in BOLD response for line-drawing objects in all the ROIs for either foveal or peripheral presentation. Only peripherally presented text showed a distinctive rate-dependence pattern. Although it is possible that the differentiation starts to emerge at the early retinotopic cortical representation, the neural bottleneck for slower reading of peripherally presented text may be a special property of peripheral text processing in object category selective cortex. PMID:26237299
Adult Word Recognition and Visual Sequential Memory
ERIC Educational Resources Information Center
Holmes, V. M.
2012-01-01
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Making the invisible visible: verbal but not visual cues enhance visual detection.
Lupyan, Gary; Spivey, Michael J
2010-07-07
Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of a target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Participants completed an object detection task in which they made an object-presence or -absence decision about briefly presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d'). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect correlated positively with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing, and they inform our understanding of how language affects perception.
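The sensitivity measure d' used above is standardly computed as z(hit rate) − z(false-alarm rate). A minimal sketch with made-up trial counts (not data from the study), using Python's stdlib NormalDist:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) avoids infinite
    z-scores when an observed rate is exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for verbally cued vs. uncued detection blocks.
cued = d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42)
uncued = d_prime(hits=35, misses=15, false_alarms=12, correct_rejections=38)
```

A higher d' for cued blocks would correspond to the reported enhancement of perceptual sensitivity by verbal cues.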
Lack of habituation to shocking words: the attentional bias to their spatial origin is context free.
Bertels, Julie; Kolinsky, Régine; Morais, José
2012-01-01
Following a suggestion made by Aquino and Arnell (2007), we assumed that the processing of emotional words is influenced by their context of presentation. Supporting this idea, previous studies using the emotional Stroop task in its visual or auditory variant revealed different results depending on the mixed versus blocked presentation of the stimuli (Bertels, Kolinsky, Pietrons, & Morais, 2011; Richards, French, Johnson, Naparstek, & Williams, 1992). In the present study, we investigated the impact of these presentation designs on the occurrence of spatial attentional biases in a modified version of the beep-probe task (Bertels, Kolinsky, & Morais, 2010). Attentional vigilance to taboo words, as well as non-spatial slowing effects of these words, was observed regardless of whether the design was mixed or blocked, whereas attentional vigilance to positive words was observed only in the mixed design. Together with the results from our previous study (Bertels et al., 2010), the present data support the reliability of the effects of shocking stimuli, whereas vigilance to positive words appears to emerge only in a threatening context.
When a Picture Isn't Worth 1000 Words: Learners Struggle to Find Meaning in Data Visualizations
ERIC Educational Resources Information Center
Stofer, Kathryn A.
2016-01-01
The oft-repeated phrase "a picture is worth a thousand words" supposes that an image can replace a profusion of words to more easily express complex ideas. For scientific visualizations that represent profusions of numerical data, however, an untranslated academic visualization suffers the same pitfalls untranslated jargon does. Previous…
Artful terms: A study on aesthetic word usage for visual art versus film and music.
Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan
2012-01-01
Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187–201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms. PMID:23145287
Korinth, Sebastian Peter; Breznitz, Zvia
2014-01-01
Higher N170 amplitudes to words and to faces were recently reported for faster readers of German. Since the shallow German orthography allows phonological recoding of single letters, the reported speed advantages might originate in especially well-developed visual processing skills of faster readers. In contrast to German, adult readers of Hebrew are forced to process letter chunks up to whole words. This dependence on more complex visual processing might have created ceiling effects for this skill. The current study therefore examined whether visual processing skills, as reflected by N170 amplitudes, also explain reading speed differences in the deep Hebrew orthography. Forty university students, native speakers of Hebrew without reading impairments, completed a lexical decision task (i.e., deciding whether a visually presented stimulus represents a real or a pseudo word) and a face decision task (i.e., deciding whether a face was presented complete or with missing facial features) while their electroencephalogram was recorded from 64 scalp positions. In both tasks, stronger event-related potentials (ERPs) were observed for faster readers in time windows at about 200 ms. Unlike in previous studies, ERP waveforms in the relevant time windows did not correspond to N170 scalp topographies. The results support the notion of visual processing ability as an orthography-independent marker of reading proficiency, advancing our understanding of regular and impaired reading development.
Binocular coordination in response to stereoscopic stimuli
NASA Astrophysics Data System (ADS)
Liversedge, Simon P.; Holliman, Nicolas S.; Blythe, Hazel I.
2009-02-01
Humans actively explore their visual environment by moving their eyes. Precise coordination of the eyes during visual scanning underlies the experience of a unified perceptual representation and is important for the perception of depth. We report data from three psychological experiments investigating human binocular coordination during visual processing of stereoscopic stimuli. In the first experiment participants were required to read sentences that contained a stereoscopically presented target word. Half of the word was presented exclusively to one eye and half exclusively to the other eye. Eye movements were recorded and showed that saccadic targeting was uninfluenced by the stereoscopic presentation, strongly suggesting that complementary retinal stimuli are perceived as a single, unified input prior to saccade initiation. In a second eye movement experiment we presented words stereoscopically to measure Panum's Fusional Area for linguistic stimuli. In the final experiment we compared binocular coordination during saccades between simple dot stimuli under 2D, stereoscopic 3D and real 3D viewing conditions. Results showed that depth-appropriate vergence movements were made during saccades and fixations to real 3D stimuli, but only during fixations on stereoscopic 3D stimuli. 2D stimuli did not induce depth vergence movements. Together, these experiments indicate that stereoscopic visual stimuli are fused when they fall within Panum's Fusional Area, and that saccade metrics are computed on the basis of a unified percept. Also, there is sensitivity to non-foveal retinal disparity in real 3D stimuli, but not in stereoscopic 3D stimuli, and the system responsible for binocular coordination responds to this during saccades as well as fixations.
Short-term retention of pictures and words: evidence for dual coding systems.
Pellegrino, J W; Siegel, A W; Dhawan, M
1975-03-01
The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.
Miles, James D; Proctor, Robert W
2009-10-01
In the current study, we show that the non-intentional processing of visually presented words and symbols can be attenuated by sounds. Importantly, this attenuation is dependent on the similarity in categorical domain between the sounds and words or symbols. Participants performed a task in which left or right responses were made contingent on the color of a centrally presented target that was either a location word (LEFT or RIGHT) or a left or right arrow. Responses were faster when they were on the side congruent with the word or arrow. This bias was reduced for location words by a neutral spoken word and for arrows by a tone series, but not vice versa. We suggest that words and symbols are processed with minimal attentional requirements until they are categorized into specific knowledge domains, but then become sensitive to other information within the same domain regardless of the similarity between modalities.
Seamon, John G; Lee, Ihno A; Toner, Sarah K; Wheeler, Rachel H; Goodkind, Madeleine S; Birch, Antoine D
2002-11-01
Do participants in the Deese, Roediger, and McDermott (DRM) procedure demonstrate false memory because they think of nonpresented critical words during study and confuse them with words that were actually presented? In two experiments, 160 participants studied eight visually presented DRM lists at a rate of 2 s or 5 s per word. Half of the participants rehearsed silently; the other half rehearsed overtly. Following study, the participants' memory for the lists was tested by recall or recognition. Typical false memory results were obtained for both memory measures. More important, two new results were observed. First, a large majority of the overt-rehearsal participants spontaneously rehearsed approximately half of the critical words during study. Second, critical-word rehearsal at study enhanced subsequent false recall, but it had no effect on false recognition or remember judgments for falsely recognized critical words. Thinking of critical words during study was unnecessary for producing false memory.
Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.
Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia
2015-09-01
We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.
Word boundaries affect visual attention in Chinese reading.
Li, Xingshan; Ma, Guojie
2012-01-01
In two experiments, we explored attention deployment during the reading of Chinese words using a probe detection task. In both experiments, Chinese readers saw four simplified Chinese characters briefly, and then a probe was presented at one of the character positions. The four characters constituted either one word or two words of two characters each. Reaction time was shorter when the probe was at the character 2 position than at the character 3 position in the two-word condition, but not in the one-word condition. In Experiment 2, there were more trials and the materials were more carefully controlled, and the results replicated those of Experiment 1. These results suggest that word boundary information affects attentional deployment in Chinese reading.
Does N200 reflect semantic processing?--An ERP study on Chinese visual word recognition.
Du, Yingchun; Zhang, Qin; Zhang, John X
2014-01-01
Recent event-related potential research has reported an N200 response, a negative deflection peaking around 200 ms following the visual presentation of two-character Chinese words. This N200 shows amplitude enhancement upon immediate repetition, and there has been preliminary evidence that it reflects orthographic but not semantic processing. The present study tested whether this N200 is indeed unrelated to semantic processing using more sensitive measures, including two tasks engaging semantic processing either implicitly or explicitly and a within-trial priming paradigm. In Exp. 1, participants viewed repeated, semantically related, and unrelated prime-target word pairs while performing a lexical decision task, judging whether or not each target was a real word. In Exp. 2, participants viewed high-related, low-related, and unrelated word pairs while performing a semantic task, judging whether each word pair was related in meaning. In both tasks, semantic priming was found in both the behavioral data and the N400 ERP responses. Critically, while repetition priming elicited a clear and large enhancement of the N200 response, semantic priming did not modulate the same response. The results indicate that the N200 repetition enhancement effect cannot be explained by semantic priming and that this specific N200 response is unlikely to reflect semantic processing.
Unfolding Visual Lexical Decision in Time
Barca, Laura; Pezzulo, Giovanni
2012-01-01
Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called “lexicality effect” (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as “lexical” or “non-lexical”: high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419
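Trajectory attraction of the kind measured here is often quantified as the maximum perpendicular deviation of the recorded x,y samples from the direct start-to-end path. A minimal sketch with hypothetical coordinate samples (not the study's data):

```python
import math

def max_deviation(trajectory):
    """Maximum perpendicular deviation of a mouse trajectory from the
    straight line connecting its start and end points.

    trajectory: list of (x, y) samples. Larger values indicate stronger
    spatial attraction toward the competing response alternative.
    """
    (x0, y0), (xn, yn) = trajectory[0], trajectory[-1]
    length = math.hypot(xn - x0, yn - y0)
    best = 0.0
    for x, y in trajectory[1:-1]:
        # Cross-product formula for point-to-line distance.
        d = abs((xn - x0) * (y0 - y) - (x0 - x) * (yn - y0)) / length
        best = max(best, d)
    return best

# A direct movement vs. one that bows toward the opposite response.
direct = [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
curved = [(0, 0), (2, 1), (3, 2), (4, 3), (4, 4)]
```

Under this measure, ambiguous stimuli such as pseudowords would yield larger deviations than high-frequency words.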
The sound of enemies and friends in the neighborhood.
Pecher, Diane; Boot, Inge; van Dantzig, Saskia; Madden, Carol J; Huber, David E; Zeelenberg, René
2011-01-01
Previous studies (e.g., Pecher, Zeelenberg, & Wagenmakers, 2005) found that semantic classification performance is better for target words with orthographic neighbors that are mostly from the same semantic class (e.g., living) compared to target words with orthographic neighbors that are mostly from the opposite semantic class (e.g., nonliving). In the present study we investigated the contribution of phonology to orthographic neighborhood effects by comparing effects of phonologically congruent orthographic neighbors (book-hook) to phonologically incongruent orthographic neighbors (sand-wand). The prior presentation of a semantically congruent word produced larger effects on subsequent animacy decisions when the previously presented word was a phonologically congruent neighbor than when it was a phonologically incongruent neighbor. In a second experiment, performance differences between target words with versus without semantically congruent orthographic neighbors were larger if the orthographic neighbors were also phonologically congruent. These results support models of visual word recognition that assume an important role for phonology in cascaded access to meaning.
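The orthographic neighbors discussed above are conventionally defined as same-length words differing from the target by exactly one letter (Coltheart's N). A minimal sketch over a toy lexicon; the word list is illustrative, not the study's materials:

```python
def orthographic_neighbors(word, lexicon):
    """Return the orthographic neighbors of `word`: lexicon entries of
    the same length that differ from it by exactly one letter."""
    return [
        w for w in lexicon
        if len(w) == len(word) and w != word
        and sum(a != b for a, b in zip(w, word)) == 1
    ]

# Toy lexicon. "book"/"hook" are phonologically congruent neighbors,
# whereas "sand"/"wand" are spelled alike but pronounced differently.
lexicon = {"book", "hook", "look", "boot", "sand", "wand", "hand", "band"}
```

Splitting such neighbors by phonological congruence is the contrast the study exploits.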
ERIC Educational Resources Information Center
Hicks, J.L.; Starns, J.J.
2005-01-01
We used implicit measures of memory to ascertain whether false memories for critical nonpresented items in the DRM paradigm (Deese, 1959; Roediger & McDermott, 1995) contain structural and perceptual detail. In Experiment 1, we manipulated presentation modality in a visual word-stem-completion task. Critical item priming was significant and…
Evidence for a Limited-Cascading Account of Written Word Naming
ERIC Educational Resources Information Center
Bonin, Patrick; Roux, Sebastien; Barry, Christopher; Canell, Laura
2012-01-01
We address the issue of how information flows within the written word production system by examining written object-naming latencies. We report 4 experiments in which we manipulate variables assumed to have their primary impact at the level of object recognition (e.g., quality of visual presentation of pictured objects), at the level of semantic…
Remembering Plurals: Unit of Coding and Form of Coding during Serial Recall.
ERIC Educational Resources Information Center
Van Der Molen, Hugo; Morton, John
1979-01-01
Adult females recalled lists of six words, including some plural nouns, presented visually in sequence. A frequent error was to detach the plural from its root. This supports a morpheme-based as opposed to a unitary word code. Evidence for a primarily phonological coding of the plural morpheme was obtained. (Author/RD)
InfoSyll: A Syllabary Providing Statistical Information on Phonological and Orthographic Syllables
ERIC Educational Resources Information Center
Chetail, Fabienne; Mathey, Stephanie
2010-01-01
There is now a growing body of evidence in various languages supporting the claim that syllables are functional units of visual word processing. In the perspective of modeling the processing of polysyllabic words and the activation of syllables, current studies investigate syllabic effects with subtle manipulations. We present here a syllabary of…
Investigating Orthographic and Semantic Aspects of Word Learning in Poor Comprehenders
ERIC Educational Resources Information Center
Ricketts, Jessie; Bishop, Dorothy V. M.; Nation, Kate
2008-01-01
This study compared orthographic and semantic aspects of word learning in children who differed in reading comprehension skill. Poor comprehenders and controls matched for age (9-10 years), nonverbal ability and decoding skill were trained to pronounce 20 visually presented nonwords, 10 in a consistent way and 10 in an inconsistent way. They then…
Encoding Modality Can Affect Memory Accuracy via Retrieval Orientation
ERIC Educational Resources Information Center
Pierce, Benton H.; Gallo, David A.
2011-01-01
Research indicates that false memory is lower following visual than auditory study, potentially because visual information is more distinctive. In the present study we tested the extent to which retrieval orientation can cause a modality effect on memory accuracy. Participants studied unrelated words in different modalities, followed by criterial…
Notions of Technology and Visual Literacy
ERIC Educational Resources Information Center
Stankiewicz, Mary Ann
2004-01-01
For many art educators, the word "technology" conjures up visions of overhead projectors and VCRs, video and digital cameras, computers equipped with graphic programs and presentation software, digital labs where images rendered in pixels replace the debris of charcoal dust and puddled paints. One forgets that visual literacy and technology have…
ERIC Educational Resources Information Center
Weber-Fox, Christine; Hart, Laura J.; Spruill, John E., III
2006-01-01
This study examined how school-aged children process different grammatical categories. Event-related brain potentials elicited by words in visually presented sentences were analyzed according to seven grammatical categories with naturally varying characteristics of linguistic functions, semantic features, and quantitative attributes of length and…
The anatomy of language: contributions from functional neuroimaging
PRICE, CATHY J.
2000-01-01
This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th-century neurological model, which describes both the anatomy and the cognitive components of auditory and visual word processing, and two 20th-century cognitive models that are not constrained by anatomy but emphasise two different routes to reading that are not present in the neurological model. A series of functional imaging studies are then presented which show that, as predicted by the 19th-century neurologists, auditory and visual word repetition engage the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie respectively in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th-century cognitive models. This region and its function may have been missed by the 19th-century neurologists because selective damage is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th-century neurologists with the cognitive finesse of the 20th-century cognitive models. The review focuses on single word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging. PMID:11117622
Günther, Fritz; Dudschig, Carolin; Kaup, Barbara
2018-05-01
Theories of embodied cognition assume that concepts are grounded in non-linguistic, sensorimotor experience. In support of this assumption, previous studies have shown that upwards response movements are faster than downwards movements after participants have been presented with words whose referents are typically located in the upper vertical space (and vice versa for downwards responses). This is taken as evidence that processing these words reactivates sensorimotor experiential traces. This congruency effect was also found for novel words, after participants learned these words as labels for novel objects that they encountered either in their upper or lower visual field. While this indicates that direct experience with a word's referent is sufficient to evoke said congruency effects, the present study investigates whether this direct experience is also a necessary condition. To this end, we conducted five experiments in which participants learned novel words from purely linguistic input: Novel words were presented in pairs with real up- or down-words (Experiment 1); they were presented in natural sentences where they replaced these real words (Experiment 2); they were presented as new labels for these real words (Experiment 3); and they were presented as labels for novel combined concepts based on these real words (Experiments 4 and 5). In all five experiments, we did not find any congruency effects elicited by the novel words; however, participants were always able to make correct explicit judgements about the vertical dimension associated with the novel words. These results suggest that direct experience is necessary for reactivating experiential traces, but this reactivation is not a necessary condition for understanding (in the sense of storing and accessing) the corresponding aspects of word meaning. Copyright © 2017 Cognitive Science Society, Inc.
Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification
ERIC Educational Resources Information Center
Burt, Jennifer S.
2009-01-01
University students participated in five experiments concerning the effects of unmasked, orthographically similar primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…
Deep Learning of Orthographic Representations in Baboons
Hannagan, Thomas; Ziegler, Johannes C.; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan
2014-01-01
What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process. PMID:24416300
Are You Taking the Fastest Route to the RESTAURANT?
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta
2018-03-01
Most words in books and digital media are written in lowercase. The primacy of this format has been demonstrated by experiments showing that common words are identified faster in lowercase (e.g., molecule) than in uppercase (MOLECULE). However, there are common words that are usually written in uppercase (street signs, billboards; e.g., STOP, PHARMACY). We conducted a lexical decision experiment to examine whether the usual letter-case configuration (uppercase vs. lowercase) of common words modulates word identification times. To this end, we selected 78 molecule-type words and 78 PHARMACY-type words that were presented in lowercase or uppercase. For molecule-type words, the lowercase format elicited faster responses than the uppercase format, whereas this effect was absent for PHARMACY-type words. This pattern of results suggests that the usual letter configuration of common words plays an important role during visual word processing.
Resting state neural networks for visual Chinese word processing in Chinese adults and children.
Li, Ling; Liu, Jiangang; Chen, Feiyan; Feng, Lu; Li, Hong; Tian, Jie; Lee, Kang
2013-07-01
This study examined the resting state neural networks for visual Chinese word processing in Chinese children and adults. Both the functional connectivity (FC) and amplitude of low frequency fluctuation (ALFF) approaches were used to analyze the fMRI data collected when Chinese participants were not engaged in any specific explicit tasks. We correlated time series extracted from the visual word form area (VWFA) with those in other regions in the brain. We also performed ALFF analysis in the resting state FC networks. The FC results revealed that, regarding the functionally connected brain regions, there exist similar intrinsically organized resting state networks for visual Chinese word processing in adults and children, suggesting that such networks may already be functional after 3-4 years of informal exposure to reading plus 3-4 years of formal schooling. The ALFF results revealed that children appear to recruit more neural resources than adults in generally reading-irrelevant brain regions. Differences between child and adult ALFF results suggest that children's intrinsic word processing network during the resting state, though similar in functional connectivity, is still undergoing development. Further exposure to visual words and experience with reading are needed for children to develop a mature intrinsic network for word processing. The developmental course of the intrinsically organized word processing network may parallel that of the explicit word processing network. Copyright © 2013 Elsevier Ltd. All rights reserved.
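The two resting-state measures named in this abstract, seed-based FC (correlating a seed region's time series with another region's) and ALFF (spectral amplitude in the low-frequency band), can be illustrated in a few lines. This is a minimal pure-Python sketch on synthetic series standing in for real fMRI data; the series names, TR value, and noise levels are invented for illustration, and real pipelines detrend, bandpass-filter, and use FFTs rather than a naive DFT.

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length time series (the core of seed-based FC)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def alff(ts, tr, low=0.01, high=0.08):
    """Mean spectral amplitude of a time series in the low-frequency band.

    tr: repetition time in seconds (sampling interval of the series).
    A naive O(n^2) DFT keeps the sketch dependency-free.
    """
    n = len(ts)
    mean = sum(ts) / n
    centered = [v - mean for v in ts]
    amps = []
    for k in range(1, n // 2):
        freq = k / (n * tr)
        if low <= freq <= high:
            re = sum(v * math.cos(2 * math.pi * k * t / n) for t, v in enumerate(centered))
            im = sum(v * math.sin(2 * math.pi * k * t / n) for t, v in enumerate(centered))
            amps.append(2 * math.sqrt(re * re + im * im) / n)
    return sum(amps) / len(amps)

# Synthetic "seed" (e.g., VWFA) and a "target" region sharing a slow 0.03 Hz signal.
random.seed(0)
tr = 2.0   # seconds per volume (illustrative)
n = 200
slow = [math.sin(2 * math.pi * 0.03 * i * tr) for i in range(n)]
seed = [s + 0.3 * random.gauss(0, 1) for s in slow]
target = [s + 0.3 * random.gauss(0, 1) for s in slow]
unrelated = [0.3 * random.gauss(0, 1) for _ in range(n)]  # noise-only region

print(round(pearson_r(seed, target), 2))     # strongly correlated (shared slow signal)
print(round(pearson_r(seed, unrelated), 2))  # near zero
print(alff(seed, tr) > alff(unrelated, tr))  # seed carries more low-frequency power
```

Regions whose fluctuations track the seed form the intrinsic network; ALFF then characterizes how strongly each region fluctuates on its own.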
ERIC Educational Resources Information Center
Zhao, Pei; Zhao, Jing; Weng, Xuchu; Li, Su
2018-01-01
Visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of visual word N170 and its close association with individual's reading ability. In the current study, we investigated whether fine-tuning N170 for Chinese characters could emerge after…
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
The role of Broca's area in speech perception: evidence from aphasia revisited.
Hickok, Gregory; Costanzo, Maddalena; Capasso, Rita; Miceli, Gabriele
2011-12-01
Motor theories of speech perception have been revitalized as a consequence of the discovery of mirror neurons. Some authors have even promoted a strong version of the motor theory, arguing that the motor speech system is critical for perception. Part of the evidence that is cited in favor of this claim is the observation from the early 1980s that individuals with Broca's aphasia, and therefore inferred damage to Broca's area, can have deficits in speech sound discrimination. Here we re-examine this issue in 24 patients with radiologically confirmed lesions to Broca's area and various degrees of associated non-fluent speech production. Patients performed two same-different discrimination tasks involving pairs of CV syllables, one in which both CVs were presented auditorily, and the other in which one syllable was auditorily presented and the other visually presented as an orthographic form; word comprehension was also assessed using word-to-picture matching tasks in both auditory and visual forms. Discrimination performance on the all-auditory task was four standard deviations above chance, as measured using d', and was unrelated to the degree of non-fluency in the patients' speech production. Performance on the auditory-visual task, however, was worse than, and not correlated with, the all-auditory task. The auditory-visual task was related to the degree of speech non-fluency. Word comprehension was at ceiling for the auditory version (97% accuracy) and near ceiling for the orthographic version (90% accuracy). We conclude that the motor speech system is not necessary for speech perception as measured both by discrimination and comprehension paradigms, but may play a role in orthographic decoding or in auditory-visual matching of phonological forms. 2011 Elsevier Inc. All rights reserved.
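The d' statistic used to score the discrimination task is standard signal-detection sensitivity: the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch follows; the 1/(2N) correction for extreme rates is one common convention, not necessarily the one the authors applied.

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate, n_trials=None):
    """Signal-detection sensitivity: d' = z(hits) - z(false alarms).

    Rates of exactly 0 or 1 make z undefined, so when n_trials is
    given they are nudged inward by the common 1/(2N) correction.
    """
    if n_trials:
        lo, hi = 1 / (2 * n_trials), 1 - 1 / (2 * n_trials)
        hit_rate = min(max(hit_rate, lo), hi)
        fa_rate = min(max(fa_rate, lo), hi)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Chance performance (hits == false alarms) gives d' = 0;
# 95% hits with 5% false alarms gives d' of about 3.29.
print(d_prime(0.5, 0.5))
print(round(d_prime(0.95, 0.05), 2))
```

Because d' is expressed in standard-deviation units of the underlying signal and noise distributions, "four standard deviations above chance" corresponds directly to a d' of about 4.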
Phonological working memory in German children with poor reading and spelling abilities.
Steinbrink, Claudia; Klatte, Maria
2008-11-01
Deficits in verbal short-term memory have been identified as one factor underlying reading and spelling disorders. However, the nature of this deficit is still unclear. It has been proposed that poor readers make less use of phonological coding, especially if the task can be solved through visual strategies. In the framework of Baddeley's phonological loop model, this study examined serial recall performance in German second-grade children with poor vs good reading and spelling abilities. Children were presented with four-item lists of common nouns for immediate serial recall. Word length and phonological similarity as well as presentation modality (visual vs auditory) and type of recall (visual vs verbal) were varied as within-subject factors in a mixed design. Word length and phonological similarity effects did not differ between groups, thus indicating equal use of phonological coding and rehearsal in poor and good readers. However, in all conditions, except the one that combined visual presentation and visual recall, overall performance was significantly lower in poor readers. The results suggest that the poor readers' difficulties do not arise from an avoidance of the phonological loop, but from its inefficient use. An alternative account referring to unstable phonological representations in long-term memory is discussed. Copyright (c) 2007 John Wiley & Sons, Ltd.
Naber, Marnix; Vedder, Anneke; Brown, Stephen B R E; Nieuwenhuis, Sander
2016-01-01
The Stroop task is a popular neuropsychological test that measures executive control. Strong Stroop interference is commonly interpreted in neuropsychology as a diagnostic marker of impairment in executive control, possibly reflecting executive dysfunction. However, popular models of the Stroop task indicate that several other aspects of color and word processing may also account for individual differences in the Stroop task, independent of executive control. Here we use new approaches to investigate the degree to which individual differences in Stroop interference correlate with the relative processing speed of word and color stimuli, and the lateral inhibition between visual stimuli. We conducted an electrophysiological and behavioral experiment to measure (1) how quickly an individual's brain processes words and colors presented in isolation (P3 latency), and (2) the strength of an individual's lateral inhibition between visual representations with a visual illusion. Both measures explained at least 40% of the variance in Stroop interference across individuals. As these measures were obtained in contexts not requiring any executive control, we conclude that the Stroop effect also measures an individual's pre-set way of processing visual features such as words and colors. This study highlights the important contributions of stimulus processing speed and lateral inhibition to individual differences in Stroop interference, and challenges the general view that the Stroop task primarily assesses executive control.
ERIC Educational Resources Information Center
Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…
Searching for the right word: Hybrid visual and memory search for words
Boettcher, Sage E. P.; Wolfe, Jeremy M.
2016-01-01
In “Hybrid Search” (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., “London Bridge is falling down”). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no fewer than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in “London Bridge is falling down”, “London” and “down” are found no faster than “falling”. PMID:25788035
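The reported pattern, RTs linear in visual set size but logarithmic in memory set size, amounts to a simple additive cost model. The sketch below makes that model concrete; the millisecond parameter values are illustrative, not fitted to the reported data.

```python
import math

def predicted_rt(visual_set_size, memory_set_size,
                 base=500.0, per_item=40.0, per_log_mem=60.0):
    """Hybrid-search RT sketch: a fixed base cost, a linear cost per
    item in the display, and a cost per doubling of the memorized list.
    All parameter values (in ms) are invented for illustration."""
    return (base
            + per_item * visual_set_size
            + per_log_mem * math.log2(memory_set_size))

# Each extra item in the display adds a constant cost...
for v in (2, 4, 8, 16):
    print(v, predicted_rt(v, 8))
# ...while each DOUBLING of the memorized word list adds a constant
# cost, so even very long lists stay cheap to search through memory.
for m in (2, 4, 8, 16):
    print(m, predicted_rt(8, m))
```

This is why the memory set size function is "curvilinear": going from 8 to 16 memorized words costs no more than going from 2 to 4.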
Wolff, Susann; Schlesewsky, Matthias; Hirotani, Masako; Bornkessel-Schlesewsky, Ina
2008-11-01
We present two ERP studies on the processing of word order variations in Japanese, a language that is suited to shedding further light on the implications of word order freedom for neurocognitive approaches to sentence comprehension. Experiment 1 used auditory presentation and revealed that initial accusative objects elicit increased processing costs in comparison to initial subjects (in the form of a transient negativity) only when followed by a prosodic boundary. A similar effect was observed using visual presentation in Experiment 2, however only for accusative but not for dative objects. These results support a relational account of word order processing, in which the costs of comprehending an object-initial word order are determined by the linearization properties of the initial object in relation to the linearization properties of possible upcoming arguments. In the absence of a prosodic boundary, the possibility for subject omission in Japanese renders it likely that the initial accusative is the only argument in the clause. Hence, no upcoming arguments are expected and no linearization problem can arise. A prosodic boundary or visual segmentation, by contrast, indicate an object-before-subject word order, thereby leading to a mismatch between argument "prominence" (e.g. in terms of thematic roles) and linear order. This mismatch is alleviated when the initial object is highly prominent itself (e.g. in the case of a dative, which can bear the higher-ranking thematic role in a two argument relation). We argue that the processing mechanism at work here can be distinguished from more general aspects of "dependency processing" in object-initial sentences.
van den Hurk, J; Gentile, F; Jansma, B M
2011-12-01
The identification of a face comprises processing of both visual features and conceptual knowledge. Studies showing that the fusiform face area (FFA) is sensitive to face identity generally neglect this dissociation. The present study is the first that isolates conceptual face processing by using words presented in a person context instead of faces. The design consisted of 2 different conditions. In one condition, participants were presented with blocks of words related to each other at the categorical level (e.g., brands of cars, European cities). The second condition consisted of blocks of words linked to the personality features of a specific face. Both conditions were created from the same 8 × 8 word matrix, thereby controlling for visual input across conditions. Univariate statistical contrasts did not yield any significant differences between the 2 conditions in FFA. However, a machine learning classification algorithm was able to successfully learn the functional relationship between the 2 contexts and their underlying response patterns in FFA, suggesting that these activation patterns can code for different semantic contexts. These results suggest that the level of processing in FFA goes beyond facial features. This has strong implications for the debate about the role of FFA in face identification.
Blinded by taboo words in L1 but not L2.
Colbeck, Katie L; Bowers, Jeffrey S
2012-04-01
The present study compares the emotionality of English taboo words in native English speakers and native Chinese speakers who learned English as a second language. Neutral and taboo/sexual words were included in a Rapid Serial Visual Presentation (RSVP) task as to-be-ignored distracters in a short- and long-lag condition. Compared with neutral distracters, taboo/sexual distracters impaired the performance in the short-lag condition only. Of critical note, however, is that the performance of Chinese speakers was less impaired by taboo/sexual distracters. This supports the view that a first language is more emotional than a second language, even when words are processed quickly and automatically. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Emotion Words Shape Emotion Percepts
Gendron, Maria; Lindquist, Kristen A.; Barsalou, Lawrence; Barrett, Lisa Feldman
2015-01-01
People believe they see emotion written on the faces of other people. In an instant, simple facial actions are transformed into information about another's emotional state. The present research examined whether a perceiver unknowingly contributes to emotion perception with emotion word knowledge. We present 2 studies that together support a role for emotion concepts in the formation of visual percepts of emotion. As predicted, we found that perceptual priming of emotional faces (e.g., a scowling face) was disrupted when the accessibility of a relevant emotion word (e.g., anger) was temporarily reduced, demonstrating that the exact same face was encoded differently when a word was accessible versus when it was not. The implications of these findings for a linguistically relative view of emotion perception are discussed. PMID:22309717
Predictors of photo naming: Dutch norms for 327 photos.
Shao, Zeshu; Stiegert, Julia
2016-06-01
In the present study, we report naming latencies and norms for 327 photos of objects in Dutch. We provide norms for eight psycholinguistic variables: age of acquisition, familiarity, imageability, image agreement, objective and subjective visual complexity, word frequency, word length in syllables and letters, and name agreement. Furthermore, multiple regression analyses revealed that the significant predictors of photo-naming latencies were name agreement, word frequency, imageability, and image agreement. The naming latencies, norms, and stimuli are provided as supplemental materials.
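The multiple regression used to identify predictors of naming latencies can be sketched with ordinary least squares. The sketch below fits OLS via the normal equations on synthetic data with two hypothetical predictors (the coefficient values, predictor names, and noise level are invented, not taken from the authors' norms).

```python
import random

def ols_fit(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gaussian elimination. Adequate for a handful of predictors."""
    rows = [row + [1.0] for row in X]  # append an intercept column
    p = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(p)] for i in range(p)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(p)]
    for col in range(p):               # forward elimination, partial pivoting
        piv = max(range(col, p), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[piv] = xtx[piv], xtx[col]
        xty[col], xty[piv] = xty[piv], xty[col]
        for r in range(col + 1, p):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, p):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * p                   # back substitution
    for r in range(p - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, p))) / xtx[r][r]
    return beta                        # slopes per predictor, intercept last

# Synthetic naming latencies: lower name agreement and lower word
# frequency make naming slower (coefficients are illustrative).
random.seed(1)
X, y = [], []
for _ in range(300):
    name_agreement = random.uniform(0, 1)
    frequency = random.uniform(0, 1)
    rt = 900 - 200 * name_agreement - 120 * frequency + random.gauss(0, 10)
    X.append([name_agreement, frequency])
    y.append(rt)

b_agree, b_freq, intercept = ols_fit(X, y)
print(round(b_agree), round(b_freq), round(intercept))  # near -200, -120, 900
```

With real norms, the sign and significance of each fitted slope is what identifies a variable as a predictor of naming latency.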
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
Aparicio, Mario; Peigneux, Philippe; Charlier, Brigitte; Balériaux, Danielle; Kavec, Martin; Leybaert, Jacqueline
2017-01-01
We present here the first neuroimaging data for perception of Cued Speech (CS) by deaf adults who are native users of CS. CS is a visual mode of communicating a spoken language through a set of manual cues which accompany lipreading and disambiguate it. With CS, sublexical units of the oral language are conveyed clearly and completely through the visual modality without requiring hearing. The comparison of neural processing of CS in deaf individuals with processing of audiovisual (AV) speech in normally hearing individuals represents a unique opportunity to explore the similarities and differences in neural processing of an oral language delivered in a visuo-manual vs. an AV modality. The study included deaf adult participants who were early CS users and native hearing users of French who process speech audiovisually. Words were presented in an event-related fMRI design. Three conditions were presented to each group of participants. The deaf participants saw CS words (manual + lipread), words presented as manual cues alone, and words presented to be lipread without manual cues. The hearing group saw AV spoken words, audio-alone and lipread-alone. Three findings are highlighted. First, the middle and superior temporal gyrus (excluding Heschl’s gyrus) and left inferior frontal gyrus pars triangularis constituted a common, amodal neural basis for AV and CS perception. Second, integration was inferred in posterior parts of superior temporal sulcus for audio and lipread information in AV speech, but in the occipito-temporal junction, including MT/V5, for the manual cues and lipreading in CS. Third, the perception of manual cues showed a much greater overlap with the regions activated by CS (manual + lipreading) than lipreading alone did. This supports the notion that manual cues play a larger role than lipreading for CS processing. 
The present study contributes to a better understanding of the role of manual cues as support of visual speech perception in the framework of the multimodal nature of human communication. PMID:28424636
Aphasic and amnesic patients' verbal vs. nonverbal retentive abilities.
Cermak, L S; Tarlow, S
1978-03-01
Four different groups of patients (aphasics, alcoholic Korsakoffs, chronic alcoholics, and control patients) were asked to detect either repeated words presented orally, repeated words presented visually, repeated pictures or repeated shapes, during the presentation of a list of similarly constructed stimuli. It was discovered that on the verbal tasks, the number of words intervening between repetitions had more effect on the aphasics than on the other groups of patients. However, for the nonverbal picture repetition and shape repetition tasks, the aphasics' performance was normal, while the alcoholic Korsakoff patients were most affected by the number of intervening items. It was concluded that the aphasics' memory deficit demonstrated by the use of this paradigm was specific to the presentation of verbal material.
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad B. A.; Jassim, Sabah
2010-04-01
In this paper, the automatic lip reading problem is investigated, and an innovative approach to solving it is proposed. This new VSR approach depends on the signature of the word itself, which is obtained from a hybrid feature extraction method combining geometric, appearance, and image-transform features. The proposed VSR approach is termed "visual words". It consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted: the height and width of the mouth; the mutual information and the quality measurement between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of teeth. Each spoken word is represented by 8 signals, one per feature. These signals preserve the dynamics of the spoken word, which carry a good portion of its information. The system is then trained on these features using KNN and DTW. This approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation has proved the efficiency of the visual words approach, and shown that VSR is a speaker-dependent problem.
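The recognition step pairs dynamic time warping (which aligns feature signals of words spoken at different speeds) with nearest-neighbour classification. A minimal sketch follows; the "mouth height" trajectories and word labels are invented for illustration, and k is fixed at 1 for brevity where the paper uses KNN.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature signals,
    so that fast and slow utterances of the same word align cheaply."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

def nearest_neighbour(query, templates):
    """Label of the template signal closest to the query under DTW."""
    return min(templates, key=lambda t: dtw_distance(query, t[1]))[0]

# Toy mouth-height trajectories for two hypothetical words.
templates = [
    ("open",  [0.0, 0.5, 1.0, 1.0, 0.5, 0.0]),
    ("close", [1.0, 0.5, 0.0, 0.0, 0.5, 1.0]),
]
# A slower utterance of "open": same shape, stretched in time.
query = [0.0, 0.2, 0.5, 0.8, 1.0, 1.0, 1.0, 0.7, 0.4, 0.1]
print(nearest_neighbour(query, templates))  # → open
```

In the full system each word yields 8 such signals (one per feature), so the per-template distance would aggregate 8 DTW alignments rather than one.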
Foveal vs. parafoveal attention-grabbing power of threat-related information.
Calvo, Manuel G; Castillo, M Dolores
2005-01-01
We investigated whether threat words presented in attended (foveal) and in unattended (parafoveal) locations of the visual field are attention grabbing. Neutral (nonemotional) words were presented at fixation as probes in a lexical decision task. Each probe word was preceded by 2 simultaneous prime words (1 foveal, 1 parafoveal), either threatening or neutral, for 150 ms. The stimulus onset asynchrony (SOA) between the primes and the probe was either 300 or 1,000 ms. Results revealed slowed lexical decision times on the probe when primed by an unrelated foveal threat word at the short (300-ms) delay. In contrast, parafoveal threat words did not affect processing of the neutral probe at either delay. Nevertheless, both neutral and threat parafoveal words facilitated lexical decisions for identical probe words at 300-ms SOA. This suggests that threat words appearing outside the focus of attention do not draw or engage cognitive resources to such an extent as to produce interference in the processing of concurrent or subsequent neutral stimuli. An explanation of the lack of parafoveal interference is that semantic content is not extracted in the parafovea.
Visual Testing: An Experimental Assessment of the Encoding Specificity Hypothesis.
ERIC Educational Resources Information Center
DeMelo, Hermes T.; And Others
This study of 96 high school biology students investigates the effectiveness of visual instruction composed of simple line drawings and printed words as compared to printed-words-only instruction, visual tests, and the interaction between visual or non-visual mode of instruction and mode of testing. The subjects were randomly assigned to be given…
Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease
Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.
2013-01-01
We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find elevated perceptual thresholds for letters and word discrimination from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256
Poster presentations at medical conferences: an effective way of disseminating research?
Goodhand, J R; Giles, C L; Wahed, M; Irving, P M; Langmead, L; Rampton, D S
2011-04-01
This study aimed to ascertain the value of posters at medical meetings to presenters and delegates. The usefulness of posters to presenters at national and international meetings was evaluated by assessing the numbers of delegates visiting them and the reasons why they visited. Memorability of selected posters was assessed and factors influencing their appeal to expert delegates identified. At both the national and international meetings, very few delegates (< 5%) visited posters. Only a minority read them and fewer asked useful questions. Recall of content was so poor that it prevented identification of factors improving their memorability. Factors increasing posters' visual appeal included their scientific content, pictures/graphs and limited use of words. Few delegates visit posters and those doing so recall little of their content. To engage their audience, researchers should design visually appealing posters by presenting high quality data in pictures or graphs without an excess of words.
Strengthening the Visual Element in Visual Media Materials.
ERIC Educational Resources Information Center
Wilhelm, R. Dwight
1996-01-01
Describes how to more effectively communicate the visual element in video and audiovisual materials. Discusses identifying a central topic, developing the visual content without words, preparing a storyboard, testing its effectiveness on people who are unacquainted with the production, and writing the script with as few words as possible. (AEF)
What you say matters: exploring visual-verbal interactions in visual working memory.
Mate, Judit; Allen, Richard J; Baqués, Josep
2012-01-01
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.
Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children
ERIC Educational Resources Information Center
Vales, Catarina; Smith, Linda B.
2015-01-01
Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…
ERIC Educational Resources Information Center
Bouaffre, Sarah; Faita-Ainseba, Frederique
2007-01-01
To investigate hemispheric differences in the timing of word priming, the modulation of event-related potentials by semantic word relationships was examined in each cerebral hemisphere. Primes and targets, either categorically (silk-wool) or associatively (needle-sewing) related, were presented to the left or right visual field in a go/no-go…
ERIC Educational Resources Information Center
Borowsky, Ron; Besner, Derek
2006-01-01
D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2009-01-01
We present an MEG study of heteronym recognition, aiming to distinguish between two theories of lexical access: the "early access" theory, which entails that lexical access occurs at early (pre 200 ms) stages of processing, and the "late access" theory, which interprets this early activity as orthographic word-form identification rather than…
ERIC Educational Resources Information Center
Koen, Bobbie Jean; Hawkins, Jacqueline; Zhu, Xi; Jansen, Ben; Fan, Weihua; Johnson, Sharon
2018-01-01
Fluency is used as an indicator of reading proficiency. Many students with reading disabilities are unable to benefit from typical interventions. This study is designed to replicate Lorusso, Facoetti, Paganoni, Pezzani, and Molteni's (2006) work using FlashWord, a computer program that tachistoscopically presents words in the right or left visual…
Towards a Universal Model of Reading
Frost, Ram
2013-01-01
In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter-order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present paper discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed. PMID:22929057
Towards a universal model of reading.
Frost, Ram
2012-10-01
In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.
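The noisy-position and context-sensitive coding schemes this abstract refers to are often illustrated with open-bigram coding, in which a word is represented by its ordered letter pairs; a transposed-letter neighbour then shares most of its code with the original word, which is how such models capture letter-order insensitivity. A minimal sketch in Python (a generic illustration of the coding idea, not any specific published model):

```python
from itertools import combinations

def open_bigrams(word):
    """All ordered letter pairs of a word ('open bigrams')."""
    return {a + b for a, b in combinations(word, 2)}

def match_score(w1, w2):
    """Dice overlap of open-bigram sets: 1.0 means an identical code."""
    b1, b2 = open_bigrams(w1), open_bigrams(w2)
    return 2 * len(b1 & b2) / (len(b1) + len(b2))

# A transposed-letter neighbour keeps most of the code intact,
# while a substitution neighbour loses more of it:
print(match_score('judge', 'jugde'))  # 0.9
print(match_score('judge', 'junge'))  # 0.6
```

Under this scheme, 'jugde' and 'judge' differ only in the relative order of one letter pair, so their codes overlap far more than those of a substitution neighbour, matching readers' tolerance of transpositions.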
Embodied attention and word learning by toddlers
Yu, Chen; Smith, Linda B.
2013-01-01
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116
Rudimentary Reading Repertoires via Stimulus Equivalence and Recombination of Minimal Verbal Units
Matos, Maria Amelia; Avanzi, Alessandra Lopes; McIlvane, William J
2006-01-01
We report a study with sixteen low-SES Brazilian children that sought to establish a repertoire of relations involving dictated words, printed words, and corresponding pictures. Children were taught: (1) in response to dictated words, to select corresponding pictures; (2) in response to syllables presented in both visual and auditory formats, to select words which contained a corresponding syllable in either the first or the last position; (3) in response to dictated-word samples, to “construct” corresponding printed words by arranging their constituent syllabic components; and (4) in response to printed word samples, to construct identical printed words by arranging their syllabic constituents. After training on the first two types of tasks, children were given tests for potentially emergent relations involving printed words and pictures. Almost all exhibited relations consistent with stimulus equivalence. They also displayed emergent naming performances, not only with training words but also with new words that were recombinations of their constituent syllables. The present work was inspired by Sidman's stimulus equivalence paradigm and by Skinner's functional analysis of verbal relations, particularly as applied to conceptions of minimal behavioral units and creativity (i.e., behavioral flexibility) in the analytical units applied to verbal relations. PMID:22477340
Fujimaki, N; Miyauchi, S; Pütz, B; Sasaki, Y; Takino, R; Sakai, K; Tamada, T
1999-01-01
Functional magnetic resonance imaging was used to investigate neural activity during the judgment of visual stimuli in two groups of experiments using seven and five normal subjects. The subjects were given tasks designed differentially to involve orthographic (more generally, visual form), phonological, and lexico-semantic processes. These tasks included the judgments of whether a line was horizontal, whether a pseudocharacter or pseudocharacter string included a horizontal line, whether a Japanese katakana (phonogram) character or character string included a certain vowel, or whether a character string was meaningful (noun or verb) or meaningless. Neural activity related to the visual form process was commonly observed during judgments of both single real-characters and single pseudocharacters in lateral extrastriate visual cortex, the posterior ventral or medial occipito-temporal area, and the posterior inferior temporal area of both hemispheres. In contrast, left-lateralized activation was observed in the latter two areas during judgments of real- and pseudo-character strings. These results show that there is no katakana "word form center" whose activity is specific to real words. Activation related to the phonological process was observed, in Broca's area, the insula, the supramarginal gyrus, and the posterior superior temporal area, with greater activation in the left hemisphere. These activation foci for visual form and phonological processes of katakana also were reported for the English alphabet in previous studies. The present activation showed no additional areas for contrasts of noun judgment with other conditions and was similar between noun and verb judgment tasks, suggesting two possibilities: no strong semantic activation was produced, or the semantic process shared activation foci with the phonological process.
Making the Invisible Visible: Verbal but Not Visual Cues Enhance Visual Detection
Lupyan, Gary; Spivey, Michael J.
2010-01-01
Background Can hearing a word change what one sees? Although visual sensitivity is known to be enhanced by attending to the location of the target, perceptual enhancements following cues to the identity of an object have been difficult to find. Here, we show that perceptual sensitivity is enhanced by verbal, but not visual, cues. Methodology/Principal Findings Participants completed an object detection task in which they made an object-presence or -absence decision to briefly-presented letters. Hearing the letter name prior to the detection task increased perceptual sensitivity (d′). A visual cue in the form of a preview of the to-be-detected letter did not. Follow-up experiments found that the auditory cuing effect was specific to validly cued stimuli. The magnitude of the cuing effect positively correlated with an individual measure of vividness of mental imagery; introducing uncertainty into the position of the stimulus did not reduce the magnitude of the cuing effect, but eliminated the correlation with mental imagery. Conclusions/Significance Hearing a word made otherwise invisible objects visible. Interestingly, seeing a preview of the target stimulus did not similarly enhance detection of the target. These results are compatible with an account in which auditory verbal labels modulate lower-level visual processing. The findings show that a verbal cue in the form of hearing a word can influence even the most elementary visual processing and inform our understanding of how language affects perception. PMID:20628646
Does the advantage of the upper part of words occur at the lexical level?
Perea, Manuel; Comesaña, Montserrat; Soares, Ana P
2012-11-01
Several recent studies have shown that the upper part of words is more important than the lower part in visual word recognition. Here, we examine whether this advantage arises at the lexical level or at the letter (letter-feature) level. To address this issue, we conducted two lexical decision experiments in which words/pseudowords were preceded by a very brief (50-ms) presentation of their upper or lower parts (e.g., ). If the advantage for the upper part of words arises at the letter (letter-feature) level, the effect should occur for both words and pseudowords. Results revealed an advantage for the upper part of words, but not for pseudowords. This suggests that the advantage for the upper part of words occurs at the lexical level, rather than at the letter (or letter-feature) level.
Visual Distinctiveness and the Development of Children's False Memories
ERIC Educational Resources Information Center
Howe, Mark L.
2008-01-01
Distinctiveness effects in children's (5-, 7-, and 11-year-olds) false memory illusions were examined using visual materials. In Experiment 1, developmental trends (increasing false memories with age) were obtained using Deese-Roediger-McDermott lists presented as words and color photographs but not line drawings. In Experiment 2, when items were…
Visual Processing of Verbal and Nonverbal Stimuli in Adolescents with Reading Disabilities.
ERIC Educational Resources Information Center
Boden, Catherine; Brodeur, Darlene A.
1999-01-01
A study investigated whether 32 adolescents with reading disabilities (RD) were slower at processing visual information compared to children of comparable age and reading level, or whether their deficit was specific to the written word. Adolescents with RD demonstrated difficulties in processing rapidly presented verbal and nonverbal visual…
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta; Gomez, Pablo
2016-01-01
A number of models of visual-word recognition assume that the repetition of an item in a lexical decision experiment increases that item's familiarity/wordness. This would produce not only a facilitative repetition effect for words, but also an inhibitory effect for nonwords (i.e., more familiarity/wordness makes the negative decision slower). We conducted a two-block lexical decision experiment to examine word/nonword repetition effects in the framework of a leading "familiarity/wordness" model of the lexical decision task, namely, the diffusion model (Ratcliff et al., 2004). Results showed that while repeated words were responded to faster than the unrepeated words, repeated nonwords were responded to more slowly than the nonrepeated nonwords. Fits from the diffusion model revealed that the repetition effect for words/nonwords was mainly due to differences in the familiarity/wordness (drift rate) parameter. This word/nonword dissociation favors those accounts that posit that the previous presentation of an item increases its degree of familiarity/wordness.
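The drift-rate account summarized above can be caricatured with a toy simulation of the diffusion model: evidence accumulates noisily toward a "word" or a "nonword" boundary, and repetition is assumed to raise an item's familiarity/wordness (its drift). That speeds "word" decisions but, by pulling a nonword's drift toward zero, slows "nonword" decisions. A rough sketch with illustrative parameters, not fitted values from Ratcliff et al.:

```python
import random

def diffusion_trial(drift, boundary=1.0, start=0.5, noise=0.1, dt=0.01, max_t=5.0):
    """One diffusion-model trial: noisy evidence accumulation between a lower
    ('nonword') and an upper ('word') boundary. Returns (response, decision time)."""
    x, t = start, 0.0
    while 0.0 < x < boundary and t < max_t:
        x += drift * dt + random.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return ('word' if x >= boundary else 'nonword'), t

def mean_rt(drift, n=2000, seed=0):
    """Mean decision time across n simulated trials at a given drift rate."""
    random.seed(seed)
    return sum(diffusion_trial(drift)[1] for _ in range(n)) / n

# Repetition is assumed to add familiarity/wordness to the drift rate.
# For words that speeds the race to the 'word' boundary; for nonwords it
# drags the (negative) drift toward zero, delaying the 'nonword' decision.
print(mean_rt(0.8) < mean_rt(0.5))    # repeated vs. unrepeated word: faster
print(mean_rt(-0.2) > mean_rt(-0.5))  # repeated vs. unrepeated nonword: slower
```

A single familiarity parameter moving in the same direction for both item types thus reproduces the facilitation/inhibition dissociation the abstract reports.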
Reading impairment in schizophrenia: dysconnectivity within the visual system.
Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël
2014-01-01
Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.
Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.
2015-01-01
The N170 component of the event-related potential (ERP) reflects experience-dependent neural changes in several forms of visual expertise, including expertise for visual words. Readers skilled in writing systems that link characters to phonemes (i.e., alphabetic writing) typically produce a left-lateralized N170 to visual word forms. This study examined the N170 in three Japanese scripts that link characters to larger phonological units. Participants were monolingual English speakers (EL1) and native Japanese speakers (JL1) who were also proficient in English. ERPs were collected using a 129-channel array, as participants performed a series of experiments viewing words or novel control stimuli in a repetition detection task. The N170 was strongly left-lateralized for all three Japanese scripts (including logographic Kanji characters) in JL1 participants, but bilateral in EL1 participants viewing these same stimuli. This demonstrates that left-lateralization of the N170 is dependent on specific reading expertise and is not limited to alphabetic scripts. Additional contrasts within the moraic Katakana script revealed equivalent N170 responses in JL1 speakers for familiar Katakana words and for Kanji words transcribed into novel Katakana words, suggesting that the N170 expertise effect is driven by script familiarity rather than familiarity with particular visual word forms. Finally, for English words and novel symbol string stimuli, both EL1 and JL1 subjects produced equivalent responses for the novel symbols, and more left-lateralized N170 responses for the English words, indicating that such effects are not limited to the first language. Taken together, these cross-linguistic results suggest that similar neural processes underlie visual expertise for print in very different writing systems. PMID:18370600
Comparing different kinds of words and word-word relations to test an habituation model of priming.
Rieth, Cory A; Huber, David E
2017-06-01
Huber and O'Reilly (2003) proposed that neural habituation exists to solve a temporal parsing problem, minimizing blending between one word and the next when words are visually presented in rapid succession. They developed a neural dynamics habituation model, explaining the finding that short duration primes produce positive priming whereas long duration primes produce negative repetition priming. The model contains three layers of processing, including a visual input layer, an orthographic layer, and a lexical-semantic layer. The predicted effect of prime duration depends both on this assumed representational hierarchy and the assumption that synaptic depression underlies habituation. The current study tested these assumptions by comparing different kinds of words (e.g., words versus non-words) and different kinds of word-word relations (e.g., associative versus repetition). For each experiment, the predictions of the original model were compared to an alternative model with different representational assumptions. Experiment 1 confirmed the prediction that non-words and inverted words require longer prime durations to eliminate positive repetition priming (i.e., a slower transition from positive to negative priming). Experiment 2 confirmed the prediction that associative priming increases and then decreases with increasing prime duration, but remains positive even with long duration primes. Experiment 3 replicated the effects of repetition and associative priming using a within-subjects design and combined these effects by examining target words that were expected to repeat (e.g., viewing the target word 'BACK' after the prime phrase 'back to'). These results support the originally assumed representational hierarchy and more generally the role of habituation in temporal parsing and priming. Copyright © 2017 Elsevier Inc. All rights reserved.
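The synaptic-depression assumption behind this habituation account can be reduced to a few lines: while a prime stays on, the resources of its active representation deplete faster than they recover, so a brief prime leaves a strong residual signal (positive priming) while a long prime leaves a depleted one (the transition toward negative priming). A toy sketch with made-up constants, not the actual Huber and O'Reilly dynamics:

```python
def residual_output(prime_steps, recovery=0.02, depletion=0.3):
    """Toy synaptic-depression dynamics: a fully active prime unit (activation = 1)
    drains its synaptic resources each time step faster than they recover.
    Returns the unit's output (activation x resources) at prime offset."""
    resources = 1.0
    for _ in range(prime_steps):
        resources += recovery * (1.0 - resources) - depletion * resources
        resources = max(resources, 0.0)
    return resources  # activation is 1, so output == resources

short, long_ = residual_output(2), residual_output(50)
print(short > long_)  # brief prime leaves a strong signal; long prime a depleted one
```

Longer prime durations drive the resources toward a low fixed point, which is the mechanism the model uses to explain why positive priming flips to negative as prime duration grows.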
Attentional capture by taboo words: A functional view of auditory distraction.
Röer, Jan P; Körner, Ulrike; Buchner, Axel; Bell, Raoul
2017-06-01
It is well established that task-irrelevant, to-be-ignored speech adversely affects serial short-term memory (STM) for visually presented items compared with a quiet control condition. However, there is an ongoing debate about whether the semantic content of the speech has the capacity to capture attention and to disrupt memory performance. In the present article, we tested whether taboo words are more difficult to ignore than neutral words. Taboo words or neutral words were presented as (a) steady state sequences in which the same distractor word was repeated, (b) changing state sequences in which different distractor words were presented, and (c) auditory deviant sequences in which a single distractor word deviated from a sequence of repeated words. Experiments 1 and 2 showed that taboo words disrupted performance more than neutral words. This taboo effect did not habituate and it did not differ between individuals with high and low working memory capacity. In Experiments 3 and 4, in which only a single deviant taboo word was presented, no taboo effect was obtained. These results do not support the idea that the processing of the auditory distractors' semantic content is the result of occasional attention switches to the auditory modality. Instead, the overall pattern of results is more in line with a functional view of auditory distraction, according to which the to-be-ignored modality is routinely monitored for potentially important stimuli (e.g., self-relevant or threatening information), the detection of which draws processing resources away from the primary task. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Strand, Julia F; Sommers, Mitchell S
2011-09-01
Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
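The phi-square statistic the authors advocate can be computed directly from confusion-matrix rows: treat two items' response distributions as a 2 × k contingency table, compute chi-square, and divide by N. A sketch of that computation on toy distributions (our own reading of the statistic, not the authors' code or their confusion data):

```python
def phi_square(p, q):
    """Phi-square distance between two response-probability distributions
    (e.g., two rows of a perceptual confusion matrix). Treats the pair as a
    2 x k contingency table; 0 = identical, 1 = fully non-overlapping."""
    chi2 = 0.0
    for pj, qj in zip(p, q):
        expected = (pj + qj) / 2.0   # expected cell value under independence
        if expected > 0:
            chi2 += (pj - expected) ** 2 / expected
            chi2 += (qj - expected) ** 2 / expected
    return chi2 / 2.0                # divide by N (two unit-mass rows)

# identical response patterns -> 0; completely distinct patterns -> 1
print(phi_square([0.5, 0.5, 0.0], [0.5, 0.5, 0.0]))  # 0.0
print(phi_square([1.0, 0.0], [0.0, 1.0]))            # 1.0
```

Because the measure is bounded and graded, aggregating it over a word's perceptual neighbours gives a continuous estimate of lexical competition in either the auditory or the visual (lipread) modality.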
Perception of Words and Non-Words in the Upper and Lower Visual Fields
ERIC Educational Resources Information Center
Darker, Iain T.; Jordan, Timothy R.
2004-01-01
The findings of previous investigations into word perception in the upper and the lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye-movements to targets in the upper VF and the possibility that stimuli were not projected…
ERIC Educational Resources Information Center
Sauval, Karinne; Casalis, Séverine; Perre, Laetitia
2017-01-01
This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…
[Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].
Jin, Yufei; Ma, Meng; Yang, Xin
2016-04-01
Medical image registration is very challenging due to the various imaging modalities, variable image quality, wide inter-patient variability, and intra-patient variability as disease progresses, together with strict requirements for robustness. Inspired by semantic models, and especially by the recent tremendous progress in computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Since most medical images have poor contrast, a small dynamic range, and involve only intensities, traditional visual word models do not perform very well on them. To benefit from the advantages of this related work, we proposed a novel visual word model named directional visual words, which performs better on medical images, and applied it to medical image registration. In our experiment, the critical anatomical structures were first manually specified by experts. We then adopted the directional visual words, a coarse-to-fine spatial pyramid search strategy, and the k-means algorithm to locate the positions of the key structures accurately, and registered corresponding images by the areas around these positions. The results of experiments performed on real cardiac images showed that our method can achieve high registration accuracy in specific areas.
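The bag-of-visual-words pipeline this work builds on reduces to two steps: cluster patch descriptors into a codebook with k-means, then describe an image region by a histogram of nearest-codebook-entry ("visual word") assignments. A generic sketch of that pipeline on toy descriptors (the paper's directional descriptor and spatial-pyramid search are not reproduced here):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over descriptor vectors; the centroids form the codebook."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        for c, members in enumerate(clusters):
            if members:  # keep old centroid if a cluster emptied out
                centroids[c] = tuple(sum(dim) / len(members) for dim in zip(*members))
    return centroids

def bow_histogram(descriptors, codebook):
    """Quantize each patch descriptor to its nearest 'visual word' and count."""
    hist = [0] * len(codebook)
    for d in descriptors:
        nearest = min(range(len(codebook)),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(d, codebook[c])))
        hist[nearest] += 1
    return hist

# four toy patch descriptors forming two obvious clusters
patches = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
codebook = kmeans(patches, k=2)
print(sorted(bow_histogram(patches, codebook)))  # [2, 2]
```

Matching regions across images then amounts to comparing their visual-word histograms, which is what the coarse-to-fine spatial pyramid search refines around the expert-marked anatomical structures.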
Skill dependent audiovisual integration in the fusiform induces repetition suppression.
McNorgan, Chris; Booth, James R
2015-02-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.
Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression
McNorgan, Chris; Booth, James R.
2015-01-01
Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276
Language-Mediated Visual Orienting Behavior in Low and High Literates
Huettig, Falk; Singh, Niharika; Mishra, Ramesh Kumar
2011-01-01
The influence of formal literacy on spoken language-mediated visual orienting was investigated by using a simple look and listen task which resembles every day behavior. In Experiment 1, high and low literates listened to spoken sentences containing a target word (e.g., “magar,” crocodile) while at the same time looking at a visual display of four objects (a phonological competitor of the target word, e.g., “matar,” peas; a semantic competitor, e.g., “kachuwa,” turtle, and two unrelated distractors). In Experiment 2 the semantic competitor was replaced with another unrelated distractor. Both groups of participants shifted their eye gaze to the semantic competitors (Experiment 1). In both experiments high literates shifted their eye gaze toward phonological competitors as soon as phonological information became available and moved their eyes away as soon as the acoustic information mismatched. Low literates in contrast only used phonological information when semantic matches between spoken word and visual referent were not present (Experiment 2) but in contrast to high literates these phonologically mediated shifts in eye gaze were not closely time-locked to the speech input. These data provide further evidence that in high literates language-mediated shifts in overt attention are co-determined by the type of information in the visual environment, the timing of cascaded processing in the word- and object-recognition systems, and the temporal unfolding of the spoken language. Our findings indicate that low literates exhibit a similar cognitive behavior but instead of participating in a tug-of-war among multiple types of cognitive representations, word–object mapping is achieved primarily at the semantic level. If forced, for instance by a situation in which semantic matches are not present (Experiment 2), low literates may on occasion have to rely on phonological information but do so in a much less proficient manner than their highly literate counterparts. 
PMID:22059083
The effect of high- and low-frequency previews and sentential fit on word skipping during reading
Angele, Bernhard; Laishley, Abby; Rayner, Keith; Liversedge, Simon P.
2014-01-01
In a previous gaze-contingent boundary experiment, Angele and Rayner (2012) found that readers are likely to skip a word that appears to be the definite article “the” even when syntactic constraints do not allow for articles to occur in that position. In the present study, we investigated whether the word frequency of the preview of a three-letter target word influences a reader’s decision to fixate or skip that word. We found that the word frequency rather than the felicitousness (syntactic fit) of the preview affected how often the upcoming word was skipped. These results indicate that visual information about the upcoming word trumps information from the sentence context when it comes to making a skipping decision. Skipping parafoveal instances of “the” may therefore simply be an extreme case of skipping high-frequency words. PMID:24707791
Syllable Transposition Effects in Korean Word Recognition
ERIC Educational Resources Information Center
Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen
2015-01-01
Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…
François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni
2017-04-01
Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representations (the word-to-world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded the EEG of 20 adult participants during both an audio-alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio-alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio-alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lack of visual field asymmetries for spatial cueing in reading parafoveal Chinese characters.
Luo, Chunming; Dell'Acqua, Roberto; Proctor, Robert W; Li, Xingshan
2015-12-01
In two experiments, we investigated whether visual field (VF) asymmetries of spatial cueing are involved in reading parafoveal Chinese characters. These characters are different from linearly arranged alphabetic words in that they are logograms that are confined to a constant, square-shaped area and are composed of only a few radicals. We observed a cueing effect, but it did not vary with the VF in which the Chinese character was presented, regardless of whether the cue validity (the ratio of validly to invalidly cued targets) was 1:1 or 7:3. These results suggest that VF asymmetries of spatial cueing do not affect the reading of parafoveal Chinese characters, contrary to the reading of alphabetic words. The mechanisms of spatial attention in reading parafoveal English-like words and Chinese characters are discussed.
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta; Gomez, Pablo
2016-01-01
A number of models of visual-word recognition assume that the repetition of an item in a lexical decision experiment increases that item's familiarity/wordness. This would produce not only a facilitative repetition effect for words, but also an inhibitory effect for nonwords (i.e., more familiarity/wordness makes the negative decision slower). We conducted a two-block lexical decision experiment to examine word/nonword repetition effects in the framework of a leading “familiarity/wordness” model of the lexical decision task, namely, the diffusion model (Ratcliff et al., 2004). Results showed that while repeated words were responded to faster than the unrepeated words, repeated nonwords were responded to more slowly than the nonrepeated nonwords. Fits from the diffusion model revealed that the repetition effect for words/nonwords was mainly due to differences in the familiarity/wordness (drift rate) parameter. This word/nonword dissociation favors those accounts that posit that the previous presentation of an item increases its degree of familiarity/wordness. PMID:26925021
Toward a reliable gaze-independent hybrid BCI combining visual and natural auditory stimuli.
Barbosa, Sara; Pires, Gabriel; Nunes, Urbano
2016-03-01
Brain computer interfaces (BCIs) are one of the last communication options for patients in the locked-in state (LIS). For complete LIS patients, interfaces must be gaze-independent due to their eye impairment. However, unimodal gaze-independent approaches typically present levels of performance substantially lower than gaze-dependent approaches. The combination of multimodal stimuli has been pointed to as a viable way to increase users' performance. A hybrid visual and auditory (HVA) P300-based BCI combining simultaneous visual and auditory stimulation is proposed. Auditory stimuli are based on natural meaningful spoken words, increasing stimuli discrimination and decreasing the user's mental effort in associating stimuli to the symbols. The visual part of the interface is covertly controlled, ensuring gaze-independency. Four conditions were experimentally tested by 10 healthy participants: visual overt (VO), visual covert (VC), auditory (AU) and covert HVA. Average online accuracy for the hybrid approach was 85.3%, which is more than 32% over the VC and AU approaches. Questionnaire results indicate that the HVA approach was the least demanding gaze-independent interface. Interestingly, the P300 grand average for the HVA approach coincides with an almost perfect sum of the P300s evoked separately by the VC and AU tasks. The proposed HVA-BCI is the first solution simultaneously embedding natural spoken words and visual words to provide a communication lexicon. Online accuracy and task demand of the approach compare favorably with the state of the art. The proposed approach shows that the simultaneous combination of visual covert control and auditory modalities can effectively improve the performance of gaze-independent BCIs. Copyright © 2015 Elsevier B.V. All rights reserved.
Visual noise disrupts conceptual integration in reading.
Gao, Xuefei; Stine-Morrow, Elizabeth A L; Noh, Soo Rim; Eskew, Rhea T
2011-02-01
The Effortfulness Hypothesis suggests that sensory impairment (either simulated or age-related) may decrease capacity for semantic integration in language comprehension. We directly tested this hypothesis by measuring resource allocation to different levels of processing during reading (i.e., word vs. semantic analysis). College students read three sets of passages word-by-word, one at each of three levels of dynamic visual noise. There was a reliable interaction between processing level and noise, such that visual noise increased resources allocated to word-level processing, at the cost of attention paid to semantic analysis. Recall of the most important ideas also decreased with increasing visual noise. Results suggest that sensory challenge can impair higher-level cognitive functions in learning from text, supporting the Effortfulness Hypothesis.
Communication training in mute autistic adolescents using the written word.
LaVigna, G W
1977-06-01
The expressive and receptive use of three written words was taught to three mute autistic adolescents using a procedure based on Terrace's errorless discrimination model and Premack's language training with chimps. Expressive language was measured by the subject's selection of the appropriate word card from among the available alternatives when the corresponding object was presented. Receptive language was measured by the subject's selection of the appropriate object from among the available alternatives when the corresponding word card was presented. The sequence of the presentations and the order of placement of the available alternatives were randomized. The three subjects required 979, 1,791, and 1,644 trials, respectively, to master both the expressive and receptive use of the three words. The correct response rates for the three subjects over the entire training program were 92, 92, and 90%, respectively. It was concluded that, as concrete visual symbols, written words may provide a viable communication system for the mute autistic. The implications for treatment are discussed and suggestions for future research are made.
Richlan, Fabio; Gagl, Benjamin; Hawelka, Stefan; Braun, Mario; Schurz, Matthias; Kronbichler, Martin; Hutzler, Florian
2014-10-01
The present study investigated the feasibility of using self-paced eye movements during reading (measured by an eye tracker) as markers for calculating hemodynamic brain responses measured by functional magnetic resonance imaging (fMRI). Specifically, we were interested in whether the fixation-related fMRI analysis approach was sensitive enough to detect activation differences between reading material (words and pseudowords) and nonreading material (line and unfamiliar Hebrew strings). Reliable reading-related activation was identified in left hemisphere superior temporal, middle temporal, and occipito-temporal regions including the visual word form area (VWFA). The results of the present study are encouraging insofar as fixation-related analysis could be used in future fMRI studies to clarify some of the inconsistent findings in the literature regarding the VWFA. Our study is the first step in investigating specific visual word recognition processes during self-paced natural sentence reading via simultaneous eye tracking and fMRI, thus aiming at an ecologically valid measurement of reading processes. We provided the proof of concept and methodological framework for the analysis of fixation-related fMRI activation in the domain of reading research. © The Author 2013. Published by Oxford University Press.
Kim, Young Youn; Lee, Boreom; Shin, Yong Wook; Kwon, Jun Soo; Kim, Myung-Sun
2006-02-01
We investigated the brain substrate of word repetition effects on the implicit memory task using low-resolution electromagnetic tomography (LORETA) with high-density 128-channel EEG and individual MRI as a realistic head model. Thirteen right-handed, healthy subjects performed a word/non-word discrimination task, in which the words and non-words were presented visually, and some of the words appeared twice with a lag of one or five items. All of the subjects exhibited word repetition effects with respect to the behavioral data, in which a faster reaction time was observed to the repeated word (old word) than to the first presentation of the word (new word). The old words elicited more positive-going potentials than the new words, beginning at 200 ms and lasting until 500 ms post-stimulus. We conducted source reconstruction using LORETA at a latency of 400 ms with the peak mean global field potentials and used statistical parametric mapping for the statistical analysis. We found that the source elicited by the old words exhibited a statistically significant current density reduction in the left inferior frontal gyrus. This is the first study to investigate the generators of word repetition effects using voxel-by-voxel statistical mapping of the current density with individual MRI and high-density EEG.
PechaKucha Presentations: Teaching Storytelling, Visual Design, and Conciseness
ERIC Educational Resources Information Center
Lucas, Kristen; Rawlins, Jacob D.
2015-01-01
When speakers rely too heavily on presentation software templates, they often end up stultifying audiences with a triple-whammy of bullet points. In this article, Lucas and Rawlins present an alternative method--PechaKucha (the Japanese word for "chit chat")--a presentation style driven by a carefully planned, automatically timed…
An fMRI study of semantic processing in men with schizophrenia
Kubicki, M.; McCarley, R.W.; Nestor, P.G.; Huh, T.; Kikinis, R.; Shenton, M.E.; Wible, C.G.
2009-01-01
As a means toward understanding the neural bases of schizophrenic thought disturbance, we examined brain activation patterns in response to semantically and superficially encoded words in patients with schizophrenia. Nine male schizophrenic and 9 male control subjects were tested in a visual levels of processing (LOP) task first outside the magnet and then during the fMRI scanning procedures (using a different set of words). During the experiments visual words were presented under two conditions. Under the deep, semantic encoding condition, subjects made semantic judgments as to whether the words were abstract or concrete. Under the shallow, nonsemantic encoding condition, subjects made perceptual judgments of the font size (uppercase/lowercase) of the presented words. After performance of the behavioral task, a recognition test was used to assess the depth of processing effect, defined as better performance for semantically encoded words than for perceptually encoded words. For the scanned version only, the words for both conditions were repeated in order to assess repetition-priming effects. Reaction times were assessed in both testing scenarios. Both groups showed the expected depth of processing effect for recognition, and control subjects showed the expected increased activation of the left inferior prefrontal cortex (LIPC) under semantic encoding relative to perceptual encoding conditions as well as repetition priming for semantic conditions only. In contrast, schizophrenics showed similar patterns of fMRI activation regardless of condition. Most striking in relation to controls, patients showed decreased LIPC activation concurrent with increased left superior temporal gyrus activation for semantic encoding versus shallow encoding. Furthermore, schizophrenia subjects did not show the repetition priming effect, either behaviorally or as a decrease in LIPC activity.
In patients with schizophrenia, LIFC underactivation and left superior temporal gyrus overactivation for semantically encoded words may reflect a disease-related disruption of a distributed frontal temporal network that is engaged in the representation and processing of meaning of words, text, and discourse and which may underlie schizophrenic thought disturbance. PMID:14683698
An fMRI study of semantic processing in men with schizophrenia.
Kubicki, M; McCarley, R W; Nestor, P G; Huh, T; Kikinis, R; Shenton, M E; Wible, C G
2003-12-01
As a means toward understanding the neural bases of schizophrenic thought disturbance, we examined brain activation patterns in response to semantically and superficially encoded words in patients with schizophrenia. Nine male schizophrenic and 9 male control subjects were tested in a visual levels of processing (LOP) task first outside the magnet and then during the fMRI scanning procedures (using a different set of words). During the experiments visual words were presented under two conditions. Under the deep, semantic encoding condition, subjects made semantic judgments as to whether the words were abstract or concrete. Under the shallow, nonsemantic encoding condition, subjects made perceptual judgments of the font size (uppercase/lowercase) of the presented words. After performance of the behavioral task, a recognition test was used to assess the depth of processing effect, defined as better performance for semantically encoded words than for perceptually encoded words. For the scanned version only, the words for both conditions were repeated in order to assess repetition-priming effects. Reaction times were assessed in both testing scenarios. Both groups showed the expected depth of processing effect for recognition, and control subjects showed the expected increased activation of the left inferior prefrontal cortex (LIPC) under semantic encoding relative to perceptual encoding conditions as well as repetition priming for semantic conditions only. In contrast, schizophrenics showed similar patterns of fMRI activation regardless of condition. Most striking in relation to controls, patients showed decreased LIPC activation concurrent with increased left superior temporal gyrus activation for semantic encoding versus shallow encoding. Furthermore, schizophrenia subjects did not show the repetition priming effect, either behaviorally or as a decrease in LIPC activity.
In patients with schizophrenia, LIFC underactivation and left superior temporal gyrus overactivation for semantically encoded words may reflect a disease-related disruption of a distributed frontal temporal network that is engaged in the representation and processing of meaning of words, text, and discourse and which may underlie schizophrenic thought disturbance.
Searching for the right word: Hybrid visual and memory search for words.
Boettcher, Sage E P; Wolfe, Jeremy M
2015-05-01
In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we found no reliable effects of word order.
Thus, in "London Bridge is falling down," "London" and "down" were found no faster than "falling."
A neuroimaging study of conflict during word recognition.
Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F
2010-08-04
Using functional magnetic resonance imaging, the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish, conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.
Evidence for highly selective neuronal tuning to whole words in the "visual word form area".
Glezer, Laurie S; Jiang, Xiong; Riesenhuber, Maximilian
2009-04-30
Theories of reading have posited the existence of a neural representation coding for whole real words (i.e., an orthographic lexicon), but experimental support for such a representation has proved elusive. Using fMRI rapid adaptation techniques, we provide evidence that the human left ventral occipitotemporal cortex (specifically the "visual word form area," VWFA) contains a representation based on neurons highly selective for individual real words, in contrast to current theories that posit a sublexical representation in the VWFA.
The activation of segmental and tonal information in visual word recognition.
Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan
2013-08-01
Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., 红, hong2, "red"), homophones of the color characters (S+T+; e.g., 洪, hong2, "flood"), different-tone homophones (S+T-; e.g., 轰, hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., 瓶, ping2, "bottle"), and neutral characters (S-T-; e.g., 牵, qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.
Auditory perception modulated by word reading.
Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja
2016-10-01
Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.
Prime, David; Dell'acqua, Roberto; Arguin, Martin; Gosselin, Frédéric; Jolicœur, Pierre
2011-03-01
The sustained posterior contralateral negativity (SPCN) was used to investigate the effect of spatial layout on the maintenance of letters in visual short-term memory (VSTM). SPCN amplitude was measured for words, nonwords, and scrambled nonwords. We reexamined the effects of spatial layout of letters on SPCN amplitude in a design that equated the mean frequency of use of each position. Scrambled letters that did not form words elicited a larger SPCN than either words or nonwords, indicating a lower VSTM load for nonwords presented in a typical horizontal array than the load observed for the same letters presented in spatially scrambled locations. In contrast, prior research has shown that the spatial extent of arrays of simple stimuli did not influence the amplitude of the SPCN. Thus, the present results indicate the existence of encoding and VSTM maintenance mechanisms specific to letter and word processing. Copyright © 2010 Society for Psychophysiological Research.
Hargreaves, Ian S; Pexman, Penny M
2014-05-01
According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
Enhanced visual awareness for morality and pajamas? Perception vs. memory in 'top-down' effects.
Firestone, Chaz; Scholl, Brian J
2015-03-01
A raft of prominent findings has revived the notion that higher-level cognitive factors such as desire, meaning, and moral relevance can directly affect what we see. For example, under conditions of brief presentation, morally relevant words reportedly "pop out" and are easier to identify than morally irrelevant words. Though such results purport to show that perception itself is sensitive to such factors, much of this research instead demonstrates effects on visual recognition--which necessarily involves not only visual processing per se, but also memory retrieval. Here we report three experiments which suggest that many alleged top-down effects of this sort are actually effects on 'back-end' memory rather than 'front-end' perception. In particular, the same methods used to demonstrate popout effects for supposedly privileged stimuli (such as morality-related words, e.g. "punishment" and "victim") also yield popout effects for unmotivated, superficial categories (such as fashion-related words, e.g. "pajamas" and "stiletto"). We conclude that such effects reduce to well-known memory processes (in this case, semantic priming) that do not involve morality, and have no implications for debates about whether higher-level factors influence perception. These case studies illustrate how it is critical to distinguish perception from memory in alleged 'top-down' effects. Copyright © 2014 Elsevier B.V. All rights reserved.
Semantically Induced Distortions of Visual Awareness in a Patient with Balint's Syndrome
ERIC Educational Resources Information Center
Soto, David; Humphreys, Glyn W.
2009-01-01
We present data indicating that visual awareness for a basic perceptual feature (colour) can be influenced by the relation between the feature and the semantic properties of the stimulus. We examined semantic interference from the meaning of a colour word ("RED") on simple colour (ink related) detection responses in a patient with simultanagnosia…
The Effects of Bilateral Presentations on Lateralized Lexical Decision
ERIC Educational Resources Information Center
Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran
2007-01-01
We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the…
ERIC Educational Resources Information Center
Aizawa, Kazumi; Iso, Tatsuo; Nadasdy, Paul
2017-01-01
Testing learners' English proficiency is central to university English classes in Japan. This study developed and implemented a set of parallel online receptive aural and visual vocabulary tests that would predict learners' English proficiency. The tests shared the same target words and choices--the main difference was the presentation of the…
Effects and Interactions of Auditory and Visual Cues in Oral Communication.
ERIC Educational Resources Information Center
Keys, John W.; And Others
Visual and auditory cues were tested, separately and jointly, to determine the degree of their contribution to improving overall speech skills of the aurally handicapped. Eight sound intensity levels (from 6 to 15 decibels) were used in presenting phonetically balanced word lists and multiple-choice intelligibility lists to a sample of 24…
ERIC Educational Resources Information Center
Beal, Carole R.; Rosenblum, L. Penny
2018-01-01
Introduction: The authors examined a tablet computer application (iPad app) for its effectiveness in helping students studying prealgebra to solve mathematical word problems. Methods: Forty-three visually impaired students (that is, those who are blind or have low vision) completed eight alternating mathematics units presented using their…
ERIC Educational Resources Information Center
Hsiao, Janet H.; Lam, Sze Man
2013-01-01
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…
Music and words in the visual cortex: The impact of musical expertise.
Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent
2017-01-01
How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.
Teramoto, Wataru; Nakazaki, Takuyuki; Sekiyama, Kaoru; Mori, Shuji
2016-01-01
The present study investigated whether word width and length affect the optimal character size for reading horizontally scrolling Japanese words, using reading speed as a measure. In Experiment 1, three Japanese words, each consisting of four Hiragana characters, sequentially scrolled on a display screen from right to left. Participants, all Japanese native speakers, were instructed to read the words aloud as accurately as possible, irrespective of their order within the sequence. To quantitatively measure their reading performance, we used a rapid serial visual presentation paradigm, in which the scrolling rate was increased until the participants began to make mistakes. Thus, the highest scrolling rate at which the participants' performance exceeded an 88.9% correct rate was calculated for each character size (0.3°, 0.6°, 1.0°, and 3.0°) and scroll window size (5 or 10 character spaces). Results showed that reading performance was highest in the range of 0.6° to 1.0°, irrespective of the scroll window size. Experiment 2 investigated whether the optimal character size observed in Experiment 1 was applicable to any word width and word length (i.e., the number of characters in a word). Results showed that reading speeds were slower for longer than for shorter words, and that a word width of 3.6° was optimal among the word lengths tested (three, four, and six character words). Given that character size varied with word width and word length in the present study, the optimal character size can change with word width and word length in scrolling Japanese words. PMID:26909052
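The adaptive threshold procedure described above (finding the highest scrolling rate at which accuracy still exceeds 88.9%, i.e., 8/9 correct) can be sketched in a few lines. The rate-accuracy values below are hypothetical, not the study's data:

```python
# Hypothetical accuracy measurements: scrolling rate (characters/sec) -> proportion correct.
accuracy_by_rate = {4: 1.00, 6: 0.97, 8: 0.92, 10: 0.90, 12: 0.85, 14: 0.70}

CRITERION = 8 / 9  # the 88.9% correct criterion used in the study

def highest_passing_rate(acc_by_rate, criterion=CRITERION):
    """Return the fastest scrolling rate whose accuracy still exceeds the criterion."""
    passing = [rate for rate, acc in acc_by_rate.items() if acc > criterion]
    return max(passing) if passing else None

print(highest_passing_rate(accuracy_by_rate))  # -> 10
```

Repeating this for each character size and window size yields the per-condition reading-speed measure the abstract reports.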
Glezer, Laurie S; Kim, Judy; Rule, Josh; Jiang, Xiong; Riesenhuber, Maximilian
2015-03-25
The nature of orthographic representations in the human brain is still subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary. Copyright © 2015 the authors 0270-6474/15/354965-08$15.00/0.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Put Power into Your Presentations: Using Presentation Software Effectively
ERIC Educational Resources Information Center
Safransky, Robert J.; Burmeister, Marsha L.
2009-01-01
Microsoft PowerPoint, Apple Keynote, and OpenOffice Impress are relatively common tools in the classroom and in the boardroom these days. What makes presentation software so popular? As the Chinese proverb declares, a picture is worth a thousand words. People like visual presentations. Presentation software can make even a dull subject come to…
The neurobiological basis of seeing words
Wandell, Brian A.
2011-01-01
This review summarizes recent ideas about the cortical circuits for seeing words, an important part of the brain system for reading. Historically, the link between the visual cortex and reading has been contentious. One influential position is that the visual cortex plays a minimal role, limited to identifying contours, and that information about these contours is delivered to cortical regions specialized for reading and language. An alternative position is that specializations for seeing words develop within the visual cortex itself. Modern neuroimaging measurements—including both functional magnetic resonance imaging (fMRI) and diffusion weighted imaging with tractography data—support the position that circuitry for seeing the statistical regularities of word forms develops within the ventral occipitotemporal cortex, which also contains important circuitry for seeing faces, colors, and forms. The review explains new findings about the visual pathways, including visual field maps, as well as new findings about how we see words. The measurements from the two fields are in close cortical proximity, and there are good opportunities for coordinating theoretical ideas about function in the ventral occipitotemporal cortex. PMID:21486296
Cao, Fan; Lee, Rebecca; Shu, Hua; Yang, Yanhui; Xu, Guoqing; Li, Kuncheng; Booth, James R
2010-05-01
Developmental differences in phonological and orthographic processing in Chinese were examined in 9 year olds, 11 year olds, and adults using functional magnetic resonance imaging. Rhyming and spelling judgments were made to 2-character words presented sequentially in the visual modality. The spelling task showed greater activation than the rhyming task in right superior parietal lobule and right inferior temporal gyrus, and there were developmental increases across tasks bilaterally in these regions in addition to bilateral occipital cortex, suggesting increased involvement over age on visuo-orthographic analysis. The rhyming task showed greater activation than the spelling task in left superior temporal gyrus and there were developmental decreases across tasks in this region, suggesting reduced involvement over age on phonological representations. The rhyming and spelling tasks included words with conflicting orthographic and phonological information (i.e., rhyming words spelled differently or nonrhyming words spelled similarly) or nonconflicting information. There was a developmental increase in the difference between conflicting and nonconflicting words in left inferior parietal lobule, suggesting greater engagement of systems for mapping between orthographic and phonological representations. Finally, there were developmental increases across tasks in an anterior (Broadman area [BA] 45, 46) and posterior (BA 9) left inferior frontal gyrus, suggesting greater reliance on controlled retrieval and selection of posterior lexical representations.
Hemispheric asymmetry of emotion words in a non-native mind: a divided visual field study.
Jończyk, Rafał
2015-05-01
This study investigates hemispheric specialization for emotional words among proficient non-native speakers of English by means of the divided visual field paradigm. The motivation behind the study is to extend the monolingual hemifield research to the non-native context and see how emotion words are processed in a non-native mind. Sixty-eight females participated in the study, all highly proficient in English. The stimuli comprised 12 positive nouns, 12 negative nouns, 12 non-emotional nouns and 36 pseudo-words. To examine the lateralization of emotion, stimuli were presented unilaterally in a random fashion for 180 ms in a go/no-go lexical decision task. The perceptual data showed a right hemispheric advantage for processing speed of negative words and a complementary role of the two hemispheres in the recognition accuracy of experimental stimuli. The data indicate that processing of emotion words in a non-native language may require greater interhemispheric communication, but at the same time demonstrate a specific role of the right hemisphere in the processing of negative relative to positive valence. The results of the study are discussed in light of the methodological inconsistencies in the hemifield research as well as the non-native context in which the study was conducted.
Khelifi, Rachid; Sparrow, Laurent; Casalis, Séverine
2015-11-01
We assessed third and fifth graders' processing of parafoveal word information using a lexical decision task. On each trial, a preview word was first briefly presented parafoveally in the left or right visual field before a target word was displayed. Preview and target words could be identical, share the first three letters, or have no letters in common. Experiment 1 showed that developing readers receive the same word recognition benefit from parafoveal previews as expert readers. The impact of a change of case between preview and target in Experiment 2 showed that in all groups of readers, the preview benefit resulted from the identification of letters at an abstract level rather than from facilitation at a purely visual level. Fifth graders identified more letters from the preview than third graders. The results are interpreted within the framework of the interactive activation model. In particular, we suggest that although the processing of parafoveal information led to letter identification in developing readers, the processes involved may differ from those in expert readers. Although expert readers' processing of parafoveal information led to activation at the level of lexical representations, no such activation was observed in developing readers. Copyright © 2015 Elsevier Inc. All rights reserved.
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Rebreikina, A B; Larionova, E B; Varlamov, A A
2015-01-01
The aim of this investigation is to study the neurophysiological mechanisms of processing relevant words and unknown words. Event-related synchronization/desynchronization during categorization of three types of stimuli (known targets, known nontargets, and unknown words) was examined. The main difference between known targets and unknown stimuli was revealed in the theta1 and theta2 bands at the early stage after stimulus onset (150-300 ms) and in the delta band (400-700 ms). In the late time window, at about 800-1500 ms, theta1 ERS in response to the target stimuli was smaller than to other stimuli, but theta2 and alpha ERD in response to the target stimuli was larger than to known nontarget words.
Using Wordle as a Supplementary Research Tool
ERIC Educational Resources Information Center
McNaught, Carmel; Lam, Paul
2010-01-01
A word cloud is a special visualization of text in which the more frequently used words are effectively highlighted by occupying more prominence in the representation. We have used Wordle to produce word-cloud analyses of the spoken and written responses of informants in two research projects. The product demonstrates a fast and visually rich way…
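The core of a word cloud is a word-frequency count mapped to visual prominence. A minimal sketch of that mapping follows (the response text is hypothetical; a tool like Wordle handles the actual layout and rendering):

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count word occurrences; in a word cloud, higher counts get larger type."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

# Hypothetical informant response from an interview transcript.
response = "The course was useful, really useful, and the visual materials helped."
for word, count in word_frequencies(response):
    # Scale font size with frequency, as a word-cloud renderer would.
    print(f"{word}: count={count}, font_size={12 + 6 * count}")
```

In practice, stop words ("the", "and") are usually filtered out first so that content words dominate the cloud.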
Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies
ERIC Educational Resources Information Center
Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.
2004-01-01
Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…
Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese
ERIC Educational Resources Information Center
Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2012-01-01
Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…
MEGALEX: A megastudy of visual and auditory word recognition.
Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan
2018-06-01
Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website ( https://sedufau.shinyapps.io/megalex/ ) and are searchable at www.lexique.org , inside the Open Lexique search engine.
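The item-level regression analyses described above (regressing lexical decision times on a frequency measure and reading off the variance accounted for) can be sketched with ordinary least squares. The item data below are hypothetical, not drawn from MEGALEX:

```python
def simple_regression(x, y):
    """Ordinary least squares with one predictor: returns (intercept, slope, r_squared)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))
    return intercept, slope, 1 - ss_res / ss_tot

# Hypothetical item-level data: log10 word frequency vs. mean lexical decision RT (ms).
log_freq = [1.2, 2.5, 0.8, 3.1, 1.9, 0.5]
rt = [720, 640, 780, 600, 670, 760]
intercept, slope, r2 = simple_regression(log_freq, rt)
print(f"RT ~ {intercept:.0f} {slope:+.0f} * log_freq,  R^2 = {r2:.2f}")
```

Comparing the R-squared values obtained from several competing frequency measures is one way to rank them, as in the database's first analysis.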
AdjScales: Visualizing Differences between Adjectives for Language Learners
NASA Astrophysics Data System (ADS)
Sheinman, Vera; Tokunaga, Takenobu
In this study we introduce AdjScales, a method for scaling similar adjectives by their strength. It combines existing Web-based computational linguistic techniques in order to automatically differentiate between similar adjectives that describe the same property by strength. Though this kind of information is rarely present in most of the lexical resources and dictionaries, it may be useful for language learners that try to distinguish between similar words. Additionally, learners might gain from a simple visualization of these differences using unidimensional scales. The method is evaluated by comparison with annotation on a subset of adjectives from WordNet by four native English speakers. It is also compared against two non-native speakers of English. The collected annotation is an interesting resource in its own right. This work is a first step toward automatic differentiation of meaning between similar words for language learners. AdjScales can be useful for lexical resource enhancement.
The relationship between two visual communication systems: reading and lipreading.
Williams, A
1982-12-01
To explore the relationship between reading and lipreading and to determine whether readers and lipreaders use similar strategies to comprehend verbal messages, 60 female junior and sophomore high school students--30 good and 30 poor readers--were given a filmed lipreading test, a test to measure eye-voice span, a test of cloze ability, and a test of their ability to comprehend printed material presented one word at a time in the absence of an opportunity to regress or scan ahead. The results of this study indicated that (a) there is a significant relationship between reading and lipreading ability; (b) although good readers may be either good or poor lipreaders, poor readers are more likely to be poor than good lipreaders; (c) there are similarities in the strategies used by readers and lipreaders in their approach to comprehending spoken and written material; (d) word-by-word reading of continuous prose appears to be a salient characteristic of both poor reading and poor lipreading ability; and (e) good readers and lipreaders do not engage in word-by-word reading but rather use a combination of visual and linguistic cues to interpret written and spoken messages.
Hearing Feelings: Affective Categorization of Music and Speech in Alexithymia, an ERP Study
Goerlich, Katharina Sophia; Witteman, Jurriaan; Aleman, André; Martens, Sander
2011-01-01
Background Alexithymia, a condition characterized by deficits in interpreting and regulating feelings, is a risk factor for a variety of psychiatric conditions. Little is known about how alexithymia influences the processing of emotions in music and speech. Appreciation of such emotional qualities in auditory material is fundamental to human experience and has profound consequences for functioning in daily life. We investigated the neural signature of such emotional processing in alexithymia by means of event-related potentials. Methodology Affective music and speech prosody were presented as targets following affectively congruent or incongruent visual word primes in two conditions. In two further conditions, affective music and speech prosody served as primes and visually presented words with affective connotations were presented as targets. Thirty-two participants (16 male) judged the affective valence of the targets. We tested the influence of alexithymia on cross-modal affective priming and on N400 amplitudes, indicative of individual sensitivity to an affective mismatch between words, prosody, and music. Our results indicate that the affective priming effect for prosody targets tended to be reduced with increasing scores on alexithymia, while no behavioral differences were observed for music and word targets. At the electrophysiological level, alexithymia was associated with significantly smaller N400 amplitudes in response to affectively incongruent music and speech targets, but not to incongruent word targets. Conclusions Our results suggest a reduced sensitivity for the emotional qualities of speech and music in alexithymia during affective categorization. This deficit becomes evident primarily in situations in which a verbalization of emotional information is required. PMID:21573026
van Ackeren, Markus J; Rueschemeyer, Shirley-Ann
2014-01-01
In recent years, numerous studies have provided converging evidence that word meaning is partially stored in modality-specific cortical networks. However, little is known about the mechanisms supporting the integration of this distributed semantic content into coherent conceptual representations. In the current study we aimed to address this issue by using EEG to look at the spatial and temporal dynamics of feature integration during word comprehension. Specifically, participants were presented with two modality-specific features (i.e., visual or auditory features such as silver and loud) and asked to verify whether these two features were compatible with a subsequently presented target word (e.g., WHISTLE). Each pair of features described properties from either the same modality (e.g., silver, tiny = visual features) or different modalities (e.g., silver, loud = visual, auditory). Behavioral and EEG data were collected. The results show that verifying features that are putatively represented in the same modality-specific network is faster than verifying features across modalities. At the neural level, integrating features across modalities induces sustained oscillatory activity around the theta range (4-6 Hz) in left anterior temporal lobe (ATL), a putative hub for integrating distributed semantic content. In addition, enhanced long-range network interactions in the theta range were seen between left ATL and a widespread cortical network. These results suggest that oscillatory dynamics in the theta range could be involved in integrating multimodal semantic content by creating transient functional networks linking distributed modality-specific networks and multimodal semantic hubs such as left ATL.
Looking and touching: What extant approaches reveal about the structure of early word knowledge
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2014-01-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711
Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.
Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King
2017-11-01
Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real and non-words. During Chinese word reading, there was significantly less error compared to English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographical and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.
Individual Differences in Reported Visual Imagery and Memory Performance.
ERIC Educational Resources Information Center
McKelvie, Stuart J.; Demers, Elizabeth G.
1979-01-01
High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)
Visual Speech Primes Open-Set Recognition of Spoken Words
ERIC Educational Resources Information Center
Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.
2009-01-01
Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading, which instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs, prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Spurgeon, Jessica; Ward, Geoff; Matthews, William J.
2014-01-01
We examined the contribution of the phonological loop to immediate free recall (IFR) and immediate serial recall (ISR) of lists of between one and 15 words. Following Baddeley (1986, 2000, 2007, 2012), we assumed that visual words could be recoded into the phonological store when presented silently but that recoding would be prevented by…
Deep generative learning of location-invariant visual word recognition.
Di Bono, Maria Grazia; Zorzi, Marco
2013-01-01
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. 
These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
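The location-invariance objective the network learns can be illustrated with a toy coding scheme: tag each letter with its absolute (eye-centered) position, then re-anchor positions to the leftmost letter so that letter identity and relative order survive while retinal location is abstracted away. This is only a schematic sketch of the coding problem under assumed representations, not the paper's deep generative model; the function names are invented for illustration.

```python
def retinal_code(word, location):
    # Letter identities tagged with absolute, eye-centered positions.
    return {(location + i, ch) for i, ch in enumerate(word)}

def invariant_code(code):
    # Re-anchor positions to the leftmost occupied slot: letter identity
    # and relative order survive, absolute retinal location does not.
    start = min(pos for pos, _ in code)
    return frozenset((pos - start, ch) for pos, ch in code)

# The same word at two retinal locations maps to one representation,
# while a transposition ("act" vs. "cat") remains distinct.
same = invariant_code(retinal_code("cat", 0)) == invariant_code(retinal_code("cat", 3))
distinct = invariant_code(retinal_code("cat", 0)) != invariant_code(retinal_code("act", 0))
```

Note that this hand-built re-anchoring is exactly what the network is *not* given: in the study, the invariance emerges in the deepest hidden layer from unsupervised learning alone.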
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
Present models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in the guidance information. To address these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses the object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. This information is fed into the S-gLSTM at each iteration as guidance information, to guide the caption generation. To acquire the text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through the back-propagation of the guiding loss. Complementing guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. Besides, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than the state-of-the-art models.
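The guidance-set update described above — judging whether the last output matches the supervisory detections — can be read as a simple set operation: once a detected object class has been emitted in the caption, it is dropped from the guidance set so later steps are steered toward objects not yet mentioned. The sketch below is one plausible reading with invented names, not the S-gLSTM implementation.

```python
def update_guidance(guidance, last_word):
    # Drop the last generated word from the guidance set when it matches
    # a supervisory detection; otherwise the set is unchanged.
    return guidance - {last_word}

# Object classes detected by the detector act as the initial guidance set
# (hypothetical example classes).
guidance = {"dog", "frisbee", "grass"}
guidance = update_guidance(guidance, "dog")      # a detected class was emitted
guidance = update_guidance(guidance, "running")  # not a detected class: no-op
```

In the full model, the remaining guidance words would condition the next LSTM step; here only the bookkeeping of the word set is shown.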
ERIC Educational Resources Information Center
Calvert, Sandra L.; And Others
The purpose of this study was to examine the impact of visual and auditory presentational features on young children's selection and memory for verbally presented content. Assessed as a function of action and sound were preschool children's preferential selection and recall of words presented in a computer microworld. A computer microworld…
Vorobyev, Victor A; Alho, Kimmo; Medvedev, Svyatoslav V; Pakhomov, Sergey V; Roudas, Marina S; Rutkovskaya, Julia M; Tervaniemi, Mari; Van Zuijen, Titia L; Näätänen, Risto
2004-07-01
Positron emission tomography (PET) was used to investigate the neural basis of selective processing of linguistic material during concurrent presentation of multiple stimulus streams ("cocktail-party effect"). Fifteen healthy right-handed adult males were to attend to one of three simultaneously presented messages: one presented visually, one to the left ear, and one to the right ear. During the control condition, subjects attended to visually presented consonant letter strings and ignored auditory messages. This paper reports the modality-nonspecific language processing and visual word-form processing, whereas the auditory attention effects have been reported elsewhere [Cogn. Brain Res. 17 (2003) 201]. The left-hemisphere areas activated by both the selective processing of text and speech were as follows: the inferior prefrontal (Brodmann's area, BA 45, 47), anterior temporal (BA 38), posterior insular (BA 13), inferior (BA 20) and middle temporal (BA 21), occipital (BA 18/30) cortices, the caudate nucleus, and the amygdala. In addition, bilateral activations were observed in the medial occipito-temporal cortex and the cerebellum. Decreases of activation during both text and speech processing were found in the parietal (BA 7, 40), frontal (BA 6, 8, 44) and occipito-temporal (BA 37) regions of the right hemisphere. Furthermore, the present data suggest that the left occipito-temporal cortex (BA 18, 20, 37, 21) can be subdivided into three functionally distinct regions in the posterior-anterior direction on the basis of their activation during attentive processing of sublexical orthography, visual word form, and supramodal higher-level aspects of language.
Language Effects in Trilinguals: An ERP Study
Aparicio, Xavier; Midgley, Katherine J.; Holcomb, Phillip J.; Pu, He; Lavaur, Jean-Marc; Grainger, Jonathan
2012-01-01
Event-related potentials were recorded during the visual presentation of words in the three languages of French-English-Spanish trilinguals. Participants monitored a mixed list of unrelated non-cognate words in the three languages while performing a semantic categorization task. Words in L1 generated earlier N400 peak amplitudes than both L2 and L3 words, which peaked together. On the other hand, L2 and L3 words did differ significantly in terms of N400 amplitude, with L3 words generating greater mean amplitudes compared with L2 words. We interpret the effects of peak N400 latency as reflecting the special status of the L1 relative to later acquired languages, rather than proficiency in that language per se. On the other hand, the mean amplitude difference between L2 and L3 is thought to reflect different levels of fluency in these two languages. PMID:23133428
ERIC Educational Resources Information Center
Patten, Iomi; Edmonds, Lisa A.
2015-01-01
The present study examines the effects of training native Japanese speakers in the production of American /r/ using spectrographic visual feedback. Within a modified single-subject design, two native Japanese participants produced single words containing /r/ in a variety of positions while viewing live spectrographic feedback with the aim of…
Revolution of View: Visual Presentation under the Influence of Multidimensional Concepts
ERIC Educational Resources Information Center
Feng, Zhu
2011-01-01
The ultimate aim of artistic exploration is to explore the claim that objects are different from experience and beauty is just a by-product of the exploration. In other words, the truth in the eyes of each person may quite literally not be the same. This indicates that differences in the visual apparatus influence the viewing body's mastery of the…
Connell, Louise; Lynott, Dermot
2014-04-01
How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.
Chiang, Hsueh-Sheng; Eroh, Justin; Spence, Jeffrey S; Motes, Michael A; Maguire, Mandy J; Krawczyk, Daniel C; Brier, Matthew R; Hart, John; Kraut, Michael A
2016-08-01
How the brain combines the neural representations of features that comprise an object in order to activate a coherent object memory is poorly understood, especially when the features are presented in different modalities (visual vs. auditory) and domains (verbal vs. nonverbal). We examined this question using three versions of a modified Semantic Object Retrieval Test, where object memory was probed by a feature presented as a written word, a spoken word, or a picture, followed by a second feature always presented as a visual word. Participants indicated whether each feature pair elicited retrieval of the memory of a particular object. Sixteen subjects completed one of the three versions (N=48 in total) while their EEG was recorded simultaneously. We analyzed EEG data in four separate frequency bands (delta: 1-4 Hz; theta: 4-7 Hz; alpha: 8-12 Hz; beta: 13-19 Hz) using a multivariate data-driven approach. We found that alpha power time-locked to response was modulated by both cross-modality (visual vs. auditory) and cross-domain (verbal vs. nonverbal) probing of semantic object memory. In addition, retrieval trials showed greater changes in all frequency bands compared to non-retrieval trials across all stimulus types in both response-locked and stimulus-locked analyses, suggesting dissociable neural subcomponents involved in binding object features to retrieve a memory. We conclude that these findings support both modality/domain-dependent and modality/domain-independent mechanisms during semantic object memory retrieval. Copyright © 2016 Elsevier B.V. All rights reserved.
The Development of Cortical Sensitivity to Visual Word Forms
ERIC Educational Resources Information Center
Ben-Shachar, Michal; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.
2011-01-01
The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous…
Wegrzyn, Martin; Herbert, Cornelia; Ethofer, Thomas; Flaisch, Tobias; Kissler, Johanna
2017-11-01
Visually presented emotional words are processed preferentially and effects of emotional content are similar to those of explicit attention deployment in that both amplify visual processing. However, auditory processing of emotional words is less well characterized and interactions between emotional content and task-induced attention have not been fully understood. Here, we investigate auditory processing of emotional words, focusing on how auditory attention to positive and negative words impacts their cerebral processing. A functional magnetic resonance imaging (fMRI) study manipulating word valence and attention allocation was performed. Participants heard negative, positive and neutral words to which they either listened passively or attended by counting negative or positive words, respectively. Regardless of valence, active processing compared to passive listening increased activity in primary auditory cortex, left intraparietal sulcus, and right superior frontal gyrus (SFG). The attended valence elicited stronger activity in left inferior frontal gyrus (IFG) and left SFG, in line with these regions' role in semantic retrieval and evaluative processing. No evidence for valence-specific attentional modulation in auditory regions or distinct valence-specific regional activations (i.e., negative > positive or positive > negative) was obtained. Thus, allocation of auditory attention to positive and negative words can substantially increase their processing in higher-order language and evaluative brain areas without modulating early stages of auditory processing. Inferior and superior frontal brain structures mediate interactions between emotional content, attention, and working memory when prosodically neutral speech is processed. Copyright © 2017 Elsevier Ltd. All rights reserved.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
The picture superiority effect in categorization: visual or semantic?
Job, R; Rumiati, R; Lotto, L
1992-09-01
Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated: subjects took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike in Snodgrass and McCullough's study, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.
L1 and L2 Spoken Word Processing: Evidence from Divided Attention Paradigm.
Shafiee Nahrkhalaji, Saeedeh; Lotfi, Ahmad Reza; Koosha, Mansour
2016-10-01
The present study aims to reveal some facts concerning first language (L1) and second language (L2) spoken-word processing in unbalanced proficient bilinguals using behavioral measures. The intention here is to examine the effects of auditory repetition word priming and semantic priming in the first and second languages of these bilinguals. The other goal is to explore the effects of attention manipulation on implicit retrieval of perceptual and conceptual properties of spoken L1 and L2 words. In so doing, the participants performed auditory word priming and semantic priming as memory tests in their L1 and L2. In half of the trials of each experiment, they carried out the memory test while simultaneously performing a secondary task in the visual modality. The results revealed that effects of auditory word priming and semantic priming were present when participants processed L1 and L2 words under full attention. Attention manipulation reduced priming magnitude in both experiments in L2. Moreover, L2 word retrieval increased reaction times and reduced accuracy on the simultaneous secondary task to protect its own accuracy and speed.
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
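The greedy level-by-level merging described above can be sketched with a toy separability proxy: represent the codebook as per-class word-count histograms, and at each level merge the pair of visual words whose fusion best preserves a between-class scatter score. The scatter proxy and all names below are illustrative stand-ins, not the paper's class-separability measure or its efficient indexing structure (the real algorithm avoids re-scoring every candidate pair).

```python
from itertools import combinations

def separability(hists):
    # Toy between-class scatter: for each visual word, the spread of its
    # per-class counts around their mean. A stand-in for the paper's measure.
    n_words = len(next(iter(hists.values())))
    total = 0.0
    for w in range(n_words):
        col = [h[w] for h in hists.values()]
        mean = sum(col) / len(col)
        total += sum((c - mean) ** 2 for c in col)
    return total

def merge_words(hists, w1, w2):
    # Merge two visual words by summing their per-class counts.
    merged = {}
    for cls, h in hists.items():
        row = [c for i, c in enumerate(h) if i not in (w1, w2)]
        row.append(h[w1] + h[w2])
        merged[cls] = row
    return merged

def hierarchical_merge(hists, target_size):
    # At each level, greedily merge the pair whose fusion yields the
    # highest remaining separability, until the codebook is small enough.
    while len(next(iter(hists.values()))) > target_size:
        best = None
        n = len(next(iter(hists.values())))
        for w1, w2 in combinations(range(n), 2):
            cand = merge_words(hists, w1, w2)
            s = separability(cand)
            if best is None or s > best[0]:
                best = (s, cand)
        hists = best[1]
    return hists

# Toy per-class histograms over a 4-word codebook, reduced to 2 words.
hists = {"cat": [5, 1, 0, 4], "dog": [0, 6, 3, 1]}
reduced = hierarchical_merge(hists, 2)
```

Merging sums counts, so each class's total feature count is preserved while the histogram dimensionality shrinks, which is the point of the compact codebook.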
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
2017-11-22
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors 0270-6474/17/3711495-10$15.00/0.
Topic Transition in Educational Videos Using Visually Salient Words
ERIC Educational Resources Information Center
Gandhi, Ankit; Biswas, Arijit; Deshmukh, Om
2015-01-01
In this paper, we propose a visual saliency algorithm for automatically finding the topic transition points in an educational video. First, we propose a method for assigning a saliency score to each word extracted from an educational video. We design several mid-level features that are indicative of visual saliency. The optimal feature combination…
Crossmodal Semantic Priming by Naturalistic Sounds and Spoken Words Enhances Visual Sensitivity
ERIC Educational Resources Information Center
Chen, Yi-Chuan; Spence, Charles
2011-01-01
We propose a multisensory framework based on Glaser and Glaser's (1989) general reading-naming interference model to account for the semantic priming effect by naturalistic sounds and spoken words on visual picture sensitivity. Four experiments were designed to investigate two key issues: First, can auditory stimuli enhance visual sensitivity when…
Visual Imagery for Letters and Words. Final Report.
ERIC Educational Resources Information Center
Weber, Robert J.
In a series of six experiments, undergraduate college students visually imagined letters or words and then classified as rapidly as possible the imagined letters for some physical property such as vertical height. This procedure allowed for a preliminary assessment of the temporal parameters of visual imagination. The results delineate a number of…
A Visual Literacy Approach to Developmental and Remedial Reading.
ERIC Educational Resources Information Center
Barley, Steven D.
Photography, films, and other visual materials offer a different approach to teaching reading. For example, photographs may be arranged in sequences analogous to the ways words form sentences and sentences form stories. If, as is possible, children respond first to pictures and later to words, training they receive in visual literacy may help them…
Primativo, Silvia; Reilly, Jamie; Crutch, Sebastian J
2016-01-01
The Abstract Conceptual Feature (ACF) framework predicts that word meaning is represented within a high-dimensional semantic space bounded by weighted contributions of perceptual, affective, and encyclopedic information. The ACF, like latent semantic analysis, is amenable to distance metrics between any two words. We applied predictions of the ACF framework to abstract words using eye tracking via an adaptation of the classical ‘visual word paradigm’. Healthy adults (N=20) selected the lexical item most related to a probe word in a 4-item written word array comprising the target and three distractors. The relation between the probe and each of the four words was determined using the semantic distance metrics derived from ACF ratings. Eye-movement data indicated that the word that was most semantically related to the probe received more and longer fixations relative to distractors. Importantly, in sets where participants did not provide an overt behavioral response, the fixation rates were none the less significantly higher for targets than distractors, closely resembling trials where an expected response was given. Furthermore, ACF ratings which are based on individual words predicted eye fixation metrics of probe-target similarity at least as well as latent semantic analysis ratings which are based on word co-occurrence. The results provide further validation of Euclidean distance metrics derived from ACF ratings as a measure of one facet of the semantic relatedness of abstract words and suggest that they represent a reasonable approximation of the organization of abstract conceptual space. The data are also compatible with the broad notion that multiple sources of information (not restricted to sensorimotor and emotion information) shape the organization of abstract concepts. 
Whilst the adapted ‘visual word paradigm’ is potentially a more metacognitive task than the classical visual world paradigm, we argue that it offers potential utility for studying abstract word comprehension. PMID:26901571
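The distance metric at the heart of this design can be sketched directly: given feature-vector ratings for each word, the probe-target relation is the Euclidean distance between vectors, and the predicted target is the array item closest to the probe. The words and rating values below are invented for illustration and do not come from the ACF norms.

```python
import math

def distance(u, v):
    # Euclidean distance between two word feature vectors
    # (e.g., weighted perceptual/affective/encyclopedic ratings).
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def most_related(probe, array, features):
    # The array word whose feature vector lies closest to the probe's.
    return min(array, key=lambda w: distance(features[probe], features[w]))

# Hypothetical 3-dimensional ACF-style ratings (illustrative values only).
features = {
    "justice": [0.2, 0.7, 0.9],
    "fairness": [0.3, 0.6, 0.8],   # near "justice" in feature space
    "idea": [0.9, 0.1, 0.4],
    "moment": [0.8, 0.2, 0.1],
}
target = most_related("justice", ["fairness", "idea", "moment"], features)
```

In the experiment, the prediction is that fixations accumulate on the array item this metric singles out, even on trials without an overt response.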
Too little, too late: reduced visual span and speed characterize pure alexia.
Starrfelt, Randi; Habekost, Thomas; Leff, Alexander P
2009-12-01
Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that 1) pure alexia is caused by damage to a reading-specific brain region in the left fusiform gyrus or 2) pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise due to a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected.
Huang, Meng; Baskin, David S; Fung, Steve
2016-05-01
Rapid word recognition and reading fluency is a specialized cortical process governed by the visual word form area (VWFA), which is localized to the dominant posterior lateral occipitotemporal sulcus/fusiform gyrus. A lesion of the VWFA results in pure alexia without agraphia, characterized by letter-by-letter reading. Palinopsia is a visual processing distortion characterized by persistent afterimages and has been reported in lesions involving the nondominant occipitotemporal cortex. A 67-year-old right-handed woman with no neurologic history presented to our emergency department with acute cortical ischemic symptoms that began with a transient episode of receptive aphasia. She reported an inability to read, albeit with retained writing ability, and saw afterimages of objects. During her stroke workup, an intra-axial circumscribed enhancing mass lesion was discovered involving her dominant posterolateral occipitotemporal lobe. Given the eloquent brain involvement, she underwent preoperative functional magnetic resonance imaging with diffusion tensor imaging tractography and awake craniotomy to maximize resection and preserve function. Many organic lesions involving these regions have been reported in the literature, but to the best of our knowledge, glioblastoma involving the VWFA resulting in both clinical syndromes of pure alexia and palinopsia with superimposed functional magnetic resonance imaging and fiber tract mapping has never been reported before. Copyright © 2015 Elsevier Inc. All rights reserved.
Burton, Harold; McLaren, Donald G
2006-01-09
Visual cortex activity in the blind has been shown in Braille-literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille-naive individuals. Positive BOLD responses were noted in lower-tier visuotopic (e.g., V1, V2, VP, and V3) and several higher-tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille-naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower-tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille-naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower-tier visual areas when performing highly demanding non-visual tasks, of which reading Braille is just one example. PMID:16198053
The effect of orthographic neighborhood in the reading span task.
Robert, Christelle; Postal, Virginie; Mathey, Stéphanie
2015-04-01
This study examined whether, and to what extent, the orthographic neighborhood of words influences performance in a working memory span task. Twenty-five participants performed a reading span task in which the final words to be memorized had either no higher-frequency orthographic neighbor or at least one. In both neighborhood conditions, each participant completed three series of two, three, four, or five sentences. Results indicated an interaction between orthographic neighborhood and list length. In particular, an inhibitory effect of orthographic neighborhood on recall appeared at list length 5. A view is presented suggesting that words with higher-frequency neighbors require more resources to memorize than words with no such neighbors. The implications of the results are discussed with regard to memory processes and current models of visual word recognition.
Aural mapping of STEM concepts using literature mining
NASA Astrophysics Data System (ADS)
Bharadwaj, Venkatesh
Recent technological applications have made people's lives heavily dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is essential to using and contributing to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes are helpful for understanding and memorizing science concepts among visually impaired students. Audemes are non-verbal sound translations of a science concept. To present science concepts as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. It describes the systematic application of multiple Natural Language Processing tools and techniques, such as a dependency parser, POS tagger, information retrieval algorithms, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for STEM-related concepts. This work also presents a novel way of mapping and extracting the sounds most related to the words used in a textbook. Additionally, machine learning methods are used to tailor the output to a user's perception. The system presented is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.
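The final mapping step this abstract describes — turning a concept phrase into a short sequence of atomic sounds — can be sketched minimally as follows. The keyword-to-sound inventory, stopword list, and function names below are invented for illustration; the thesis system builds such mappings automatically with NLP tools rather than a hand-written dictionary.

```python
# Hypothetical sketch of audeme generation: a concept phrase is reduced to
# content words, each content word is looked up in a sound inventory, and the
# resulting sounds are concatenated into a short "audeme" sequence.
SOUND_INVENTORY = {
    "volcano": ["rumble.wav", "explosion.wav"],
    "rain": ["drops.wav"],
    "cycle": ["loop_motif.wav"],
    "water": ["stream.wav"],
}

STOPWORDS = {"the", "of", "a", "an", "and"}

def concept_to_audeme(concept: str, max_sounds: int = 3) -> list[str]:
    """Return a short sequence of sound files representing a concept phrase."""
    words = [w for w in concept.lower().split() if w not in STOPWORDS]
    sounds: list[str] = []
    for w in words:
        sounds.extend(SOUND_INVENTORY.get(w, []))  # unknown words contribute nothing
    return sounds[:max_sounds]
```

For example, `concept_to_audeme("the water cycle")` yields `["stream.wav", "loop_motif.wav"]` under this toy inventory; the real system would instead rank candidate sounds by relatedness and adapt the choice to the listener.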
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. 
Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
Vinter, A; Fernandes, V; Orlandi, O; Morgan, P
2013-11-01
The aim of the present study was to examine to what extent the verbal definitions of familiar objects produced by blind children reflect their peculiar perceptual experience and, in consequence, differ from those produced by sighted children. Ninety-six visually impaired children, aged between 6 and 14 years, and 32 age-matched sighted children had to define 10 words denoting concrete animate or inanimate familiar objects. The blind children evoked the tactile and auditory characteristics of objects and expressed personal perceptual experiences in their definitions. The sighted children relied on visual perception, and produced more visually oriented verbalism. In contrast, no differences were observed between children in their propensity to include functional attributes in their verbal definitions. The results are discussed in line with embodied views of cognition that postulate mandatory perceptuomotor processing of words during access to their meaning. © 2012 John Wiley & Sons Ltd.
ERIC Educational Resources Information Center
Kong, Siu Cheung; Li, Ping; Song, Yanjie
2018-01-01
This study evaluated a bilingual text-mining system, which incorporated a bilingual taxonomy of key words and provided hierarchical visualization, for understanding learner-generated text in the learning management systems through automatic identification and counting of matching key words. A class of 27 in-service teachers studied a course…
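The core operation this record describes — automatically identifying and counting taxonomy key words in learner-generated text — can be sketched as below. The taxonomy entries are invented for illustration; the evaluated system pairs each key word with its counterpart in a bilingual taxonomy and feeds the counts into a hierarchical visualization.

```python
from collections import Counter

# Toy bilingual taxonomy: each concept maps to its surface forms in two
# languages. Entries here are invented for demonstration only.
TAXONOMY = {
    "collaboration": {"collaboration", "协作"},
    "assessment": {"assessment", "评估"},
}

def count_key_words(text: str) -> Counter:
    """Count occurrences of taxonomy concepts in learner-generated text,
    matching any surface form (either language) of each concept."""
    lowered = text.lower()
    counts: Counter = Counter()
    for concept, forms in TAXONOMY.items():
        for form in forms:
            counts[concept] += lowered.count(form.lower())
    return counts
```

A production system would add tokenization and word-boundary matching (simple substring counting over-matches inside longer words), but the count-per-concept structure is the same.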
ERIC Educational Resources Information Center
Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel
2011-01-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…
On the Functional Neuroanatomy of Visual Word Processing: Effects of Case and Letter Deviance
ERIC Educational Resources Information Center
Kronbichler, Martin; Klackl, Johannes; Richlan, Fabio; Schurz, Matthias; Staffen, Wolfgang; Ladurner, Gunther; Wimmer, Heinz
2009-01-01
This functional magnetic resonance imaging study contrasted case-deviant and letter-deviant forms with familiar forms of the same phonological words (e.g., "TaXi" and "Taksi" vs. "Taxi") and found that both types of deviance led to increased activation in a left occipito-temporal region, corresponding to the visual word form area (VWFA). The…
Syllabic Parsing in Children: A Developmental Study Using Visual Word-Spotting in Spanish
ERIC Educational Resources Information Center
Álvarez, Carlos J.; Garcia-Saavedra, Guacimara; Luque, Juan L.; Taft, Marcus
2017-01-01
Some inconsistency is observed in the results from studies of reading development regarding the role of the syllable in visual word recognition, perhaps due to a disparity between the tasks used. We adopted a word-spotting paradigm, with Spanish children of second grade (mean age: 7 years) and sixth grade (mean age: 11 years). The children were…
ERIC Educational Resources Information Center
Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli
2016-01-01
The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…
ERIC Educational Resources Information Center
Mishra, Ramesh Kumar; Singh, Niharika
2014-01-01
Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…
ERIC Educational Resources Information Center
Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-01-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…
ERIC Educational Resources Information Center
Levinger, Esther
1989-01-01
States that the painted words in Jasper Johns' art act in two different capacities: concealed words partake in the artist's interrogation of visual perception; and visible painted words question classical representation. Argues that words are Johns' means of critiquing modernism. (RS)
Looking and touching: what extant approaches reveal about the structure of early word knowledge.
Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret
2015-09-01
The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants' responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. © 2014 The Authors Developmental Science Published by John Wiley & Sons Ltd.
A New Perspective on Visual Word Processing Efficiency
Houpt, Joseph W.; Townsend, James T.; Donkin, Christopher
2013-01-01
As a fundamental part of our daily lives, visual word processing has received much attention in the psychological literature. Despite the well-established accuracy advantage for perceiving letters in a word or pseudoword over letters alone or in random sequences, a comparable effect in response times has been elusive. Some researchers continue to question whether the advantage due to word context is perceptual. We use the capacity coefficient, a well-established, response-time-based measure of efficiency, to provide evidence of word processing as a particularly efficient perceptual process, complementing the results from the accuracy domain. PMID:24334151
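The capacity coefficient mentioned here is a standard response-time measure (Townsend & Nozawa, 1995), typically defined for OR tasks as C(t) = H_AB(t) / (H_A(t) + H_B(t)), where H(t) = −ln S(t) is the integrated hazard and S(t) the survivor function of the response-time distribution. The sketch below estimates it from empirical survivor functions; it is our illustration of the standard definition, not the authors' code.

```python
import math

def integrated_hazard(rts: list[float], t: float) -> float:
    """Empirical integrated hazard H(t) = -ln S(t), where S(t) is the
    proportion of response times exceeding t."""
    survivors = sum(1 for rt in rts if rt > t) / len(rts)
    if survivors <= 0.0:
        raise ValueError("S(t) = 0: H(t) is undefined at this t")
    return -math.log(survivors)

def capacity_coefficient(rts_both: list[float],
                         rts_a: list[float],
                         rts_b: list[float],
                         t: float) -> float:
    """OR-task capacity coefficient C(t) = H_AB(t) / (H_A(t) + H_B(t)).
    C(t) = 1 indicates unlimited capacity; C(t) > 1 super-capacity
    (processing together is more efficient than the parts predict)."""
    return integrated_hazard(rts_both, t) / (
        integrated_hazard(rts_a, t) + integrated_hazard(rts_b, t))
```

With identical RT distributions in all three conditions, C(t) comes out to 0.5 (limited capacity), since the joint hazard merely equals one of the two single-channel hazards rather than their sum.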
Willis, Suzi; Goldbart, Juliet; Stansfield, Jois
2014-07-01
To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language-learning difficulties with normative data from typically hearing children using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two-year period. All children had cognitive abilities within normal limits and used spoken language as the primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than the age-matched sample from the standardized memory assessment. Each of the six participants displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single-syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties, and indeed demonstrate strengths in visual working memory. Poor word recall, in combination with difficulties with early word learning, may identify children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.
Presentation Modality and Proactive Interference in Children's Short-Term Memory.
ERIC Educational Resources Information Center
Douglas, Joan Delahanty
This study examined the role of visual and auditory presentation in memory encoding processes of 80 second-grade children, using the release-from-proactive-interference short-term memory (STM) paradigm. Words were presented over three trials within one of the presentation modes and one taxonomic category, followed by a fourth trial in which the…
Heim, Stefan; Weidner, Ralph; von Overheidt, Ann-Christin; Tholen, Nicole; Grande, Marion; Amunts, Katrin
2014-03-01
Phonological and visual dysfunctions may result in reading deficits like those encountered in developmental dyslexia. Here, we use a novel approach to induce similar reading difficulties in normal readers in an event-related fMRI study, thus systematically investigating which brain regions support distinct pathways for orthographic-phonological (e.g., grapheme-to-phoneme conversion, GPC) vs. visual processing. Based upon a previous behavioural study (Tholen et al., 2011), the retrieval of phonemes from graphemes was manipulated by lowering the identifiability of letters in familiar vs. unfamiliar shapes. Visual word and letter processing was impeded by presenting the letters of a word in a moving, non-stationary manner. FMRI revealed that the visual condition activated cytoarchitectonically defined area hOC5 in the magnocellular pathway and area 7A in the right mesial parietal cortex. In contrast, the grapheme manipulation revealed different effects localised predominantly in the bilateral inferior frontal gyrus (left cytoarchitectonic area 44; right area 45) and inferior parietal lobule (including areas PF/PFm), regions that have been demonstrated to show abnormal activation in dyslexic as compared to normal readers. This pattern of activation bears close resemblance to recent findings in dyslexic samples both behaviourally and with respect to the neurofunctional activation patterns. The novel paradigm may thus prove useful in future studies to understand reading problems related to distinct pathways, potentially providing a link also to the understanding of real reading impairments in dyslexia.
An eye movement corpus study of the age-of-acquisition effect.
Dirix, Nicolas; Duyck, Wouter
2017-12-01
In the present study, we investigated the effects of word-level age of acquisition (AoA) on natural reading. Previous studies, using multiple language modalities, showed that earlier-learned words are recognized, read, spoken, and responded to faster than words learned later in life. Until now, in visual word recognition the experimental materials were limited to single-word or sentence studies. We analyzed the data of the Ghent Eye-tracking Corpus (GECO; Cop, Dirix, Drieghe, & Duyck, in press), an eyetracking corpus of participants reading an entire novel, resulting in the first eye movement megastudy of AoA effects in natural reading. We found that the ages at which specific words were learned indeed influenced reading times, above other important (correlated) lexical variables, such as word frequency and length. Shorter fixations for earlier-learned words were consistently found throughout the reading process, in both early (single-fixation durations, first-fixation durations, gaze durations) and late (total reading times) measures. Implications for theoretical accounts of AoA effects and eye movements are discussed.
Jarick, Michelle; Dixon, Mike J; Stewart, Mark T; Maxwell, Emily C; Smilek, Daniel
2009-01-01
Synaesthesia is a fascinating condition whereby individuals report extraordinary experiences when presented with ordinary stimuli. Here we examined an individual (L) who experiences time units (i.e., months of the year and hours of the day) as occupying specific spatial locations (January is 30 degrees to the left of midline). This form of time-space synaesthesia has been recently investigated by Smilek et al. (2007) who demonstrated that synaesthetic time-space associations are highly consistent, occur regardless of intention, and can direct spatial attention. We extended this work by showing that for the synaesthete L, her time-space vantage point changes depending on whether the time units are seen or heard. For example, when L sees the word JANUARY, she reports experiencing January on her left side, however when she hears the word "January" she experiences the month on her right side. L's subjective reports were validated using a spatial cueing paradigm. The names of months were centrally presented followed by targets on the left or right. L was faster at detecting targets in validly cued locations relative to invalidly cued locations both for visually presented cues (January orients attention to the left) and for aurally presented cues (January orients attention to the right). We replicated this difference in visual and aural cueing effects using hour of the day. Our findings support previous research showing that time-space synaesthesia can bias visual spatial attention, and further suggest that for this synaesthete, time-space associations differ depending on whether they are visually or aurally induced.
Neural correlates of differential retrieval orientation: Sustained and item-related components.
Woodruff, C Chad; Uncapher, Melina R; Rugg, Michael D
2006-01-01
Retrieval orientation refers to a cognitive state that biases processing of retrieval cues in service of a specific goal. The present study used a mixed fMRI design to investigate whether adoption of different retrieval orientations - as indexed by differences in the activity elicited by retrieval cues corresponding to unstudied items - is associated with differences in the state-related activity sustained across a block of test trials sharing a common retrieval goal. Subjects studied mixed lists comprising visually presented words and pictures. They then undertook a series of short test blocks in which all test items were visually presented words. The blocks varied according to whether the test items were used to cue retrieval of studied words or studied pictures. In several regions, neural activity elicited by correctly classified new items differed according to whether words or pictures were the targeted material. The loci of these effects suggest that one factor driving differential cue processing is modulation of the degree of overlap between cue and targeted memory representations. In addition to these item-related effects, neural activity sustained throughout the test blocks also differed according to the nature of the targeted material. These findings indicate that the adoption of different retrieval orientations is associated with distinct neural states. The loci of these sustained effects were distinct from those where new item activity varied, suggesting that the effects may play a role in biasing retrieval cue processing in favor of the current retrieval goal.