Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate, for each recognition word, whether it had not been presented during the encoding phase or, if it had, with which type of information it had been paired. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole (ECD) analysis of the MEG data indicated higher ECD amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
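The confidence-of-recognition score d' mentioned above is the standard signal-detection sensitivity index, z(hit rate) - z(false-alarm rate). As a hedged illustration (the function name and example rates are ours, not taken from the study), it can be computed with the Python standard library alone:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hits) - z(false alarms).

    Rates must lie strictly between 0 and 1 (in practice, extreme
    rates are usually corrected before computing d').
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 80% hits vs. 20% false alarms
print(round(d_prime(0.80, 0.20), 3))  # prints 1.683
```

Equal hit and false-alarm rates give d' = 0 (no sensitivity); larger values indicate better discrimination of old from new items.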
Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.
Shillcock, R; Ellison, T M; Monaghan, P
2000-10-01
Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.
Adult Word Recognition and Visual Sequential Memory
ERIC Educational Resources Information Center
Holmes, V. M.
2012-01-01
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Strand, Julia F; Sommers, Mitchell S
2011-09-01
Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition.
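The simplest of the well-established lexical-competition metrics the abstract refers to is neighborhood density: the number of lexicon entries one segment away from a stimulus word (by substitution, insertion, or deletion). The sketch below illustrates that edit-distance-1 rule only; it is a simplification, not the phi-square method the study advocates, and the toy lexicon and function name are ours:

```python
def is_neighbor(a, b):
    """True if a and b differ by exactly one substitution,
    insertion, or deletion (edit distance 1)."""
    if a == b:
        return False
    if len(a) == len(b):
        # Same length: neighbors iff exactly one position differs
        return sum(x != y for x, y in zip(a, b)) == 1
    if abs(len(a) - len(b)) != 1:
        return False
    # Lengths differ by one: deleting one symbol from the longer
    # string must yield the shorter one
    longer, shorter = (a, b) if len(a) > len(b) else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

# Hypothetical toy lexicon (orthographic, for illustration only)
lexicon = ["cat", "bat", "cot", "cast", "dog", "at"]
print([w for w in lexicon if is_neighbor("cat", w)])
# prints ['bat', 'cot', 'cast', 'at']
```

In activation-competition models, words with denser (and higher-frequency) neighborhoods face more competition and are recognized more slowly.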
Syllable Transposition Effects in Korean Word Recognition
ERIC Educational Resources Information Center
Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen
2015-01-01
Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…
Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.
Marcet, Ana; Perea, Manuel
2017-08-01
For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.
Recognition intent and visual word recognition.
Wang, Man-Ying; Ching, Chi-Le
2009-03-01
This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.
ERIC Educational Resources Information Center
Shafiro, Valeriy; Kharkhurin, Anatoliy V.
2009-01-01
Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…
Perea, Manuel; Panadero, Victoria
2014-01-01
The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2012-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
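The visual enhancement measure R(a) described above normalizes the audiovisual gain by the maximum possible improvement over auditory-only performance. The formula below follows the description in the abstract (gain relative to the headroom above the auditory-only score); the function name and the example scores are ours:

```python
def visual_enhancement(av_score, a_score):
    """Visual enhancement R = (AV - A) / (1 - A): the audiovisual
    gain expressed as a proportion of the maximum possible gain
    over auditory-only performance (scores are proportions 0-1)."""
    if a_score >= 1.0:
        return 0.0  # auditory-only already at ceiling; no headroom
    return (av_score - a_score) / (1.0 - a_score)

# Hypothetical example: 60% correct auditory-only, 90% audiovisual
print(round(visual_enhancement(0.90, 0.60), 2))  # prints 0.75
```

Because the measure is normalized, a listener who goes from 60% to 90% and one who goes from 92% to 98% both recover 75% of their available headroom, making enhancement comparable across baseline performance levels.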
The impact of inverted text on visual word processing: An fMRI study.
Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D
2018-06-01
Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes.
ERIC Educational Resources Information Center
Brochard, Renaud; Tassin, Maxime; Zagar, Daniel
2013-01-01
The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition
ERIC Educational Resources Information Center
Robert, Christelle; Mathey, Stephanie; Zagar, Daniel
2007-01-01
The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g.,nasse/basse-lasse-tasse-masse) or were unequally…
Evidence for Early Morphological Decomposition in Visual Word Recognition
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2010-01-01
We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…
Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J
2017-01-01
In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition.
The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words
ERIC Educational Resources Information Center
Lázaro, Miguel; Sainz, Javier; Illera, Víctor
2015-01-01
In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language
ERIC Educational Resources Information Center
Norman, Tal; Degani, Tamar; Peleg, Orna
2017-01-01
The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…
Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.
Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia
2015-09-01
We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.
Acquired prosopagnosia without word recognition deficits.
Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley
2015-01-01
It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with age-related macular degeneration (AMD) under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
ERIC Educational Resources Information Center
Sauval, Karinne; Casalis, Séverine; Perre, Laetitia
2017-01-01
This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…
Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.
Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric
2013-01-04
It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.
Do handwritten words magnify lexical effects in visual word recognition?
Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel
2016-01-01
An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.
Rapid extraction of gist from visual text and its influence on word recognition.
Asano, Michiko; Yokosawa, Kazuhiko
2011-01-01
Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.
Visual Speech Primes Open-Set Recognition of Spoken Words
ERIC Educational Resources Information Center
Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.
2009-01-01
Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…
ERIC Educational Resources Information Center
Hsiao, Janet H.; Lam, Sze Man
2013-01-01
Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…
Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies
ERIC Educational Resources Information Center
Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.
2004-01-01
Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…
Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese
ERIC Educational Resources Information Center
Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2012-01-01
Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M
2009-04-01
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition.
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
ERIC Educational Resources Information Center
Barnhart, Anthony S.; Goldinger, Stephen D.
2010-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…
Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.
Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro
2011-12-01
The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation).
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information was manipulated at the partial-phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
ERIC Educational Resources Information Center
Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.
2004-01-01
The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…
ERIC Educational Resources Information Center
Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel
2011-01-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…
ERIC Educational Resources Information Center
Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli
2016-01-01
The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…
ERIC Educational Resources Information Center
Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-01-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…
Reading Habits, Perceptual Learning, and Recognition of Printed Words
ERIC Educational Resources Information Center
Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram
2004-01-01
The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…
Caffeine Improves Left Hemisphere Processing of Positive Words
Kuchinke, Lars; Lux, Vanessa
2012-01-01
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition.
Prosodic Phonological Representations Early in Visual Word Recognition
ERIC Educational Resources Information Center
Ashby, Jane; Martin, Andrea E.
2008-01-01
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable…
[Representation of letter position in visual word recognition process].
Makioka, S
1994-08-01
Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.
ERIC Educational Resources Information Center
Sauval, Karinne; Perre, Laetitia; Casalis, Séverine
2017-01-01
The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…
Short-Term and Long-Term Effects on Visual Word Recognition
ERIC Educational Resources Information Center
Protopapas, Athanassios; Kapnoula, Efthymia C.
2016-01-01
Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…
ERIC Educational Resources Information Center
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-01-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…
Specifying Theories of Developmental Dyslexia: A Diffusion Model Analysis of Word Recognition
ERIC Educational Resources Information Center
Zeguers, Maaike H. T.; Snellings, Patrick; Tijms, Jurgen; Weeda, Wouter D.; Tamboer, Peter; Bexkens, Anika; Huizenga, Hilde M.
2011-01-01
The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and auditory lexical decision data. The first study showed…
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2011-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimuli characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading-off in their utilization of lexical and nonlexical information.
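The diffusion model parameters mentioned in the record above (Ratcliff, Gomez, & McKoon, 2004) summarize lexical decision RTs as noisy evidence accumulation toward a decision boundary. As a rough illustration only, the following is a minimal sketch of a two-boundary diffusion simulation; the function names and all parameter values are assumptions for demonstration, not fits from any of the studies listed here.

```python
import random

def simulate_ddm_trial(drift, boundary, noise=1.0, dt=0.001,
                       non_decision=0.3, rng=random):
    """Simulate one two-boundary diffusion trial.

    Evidence starts at 0 and accumulates in small time steps until it
    crosses +boundary (correct) or -boundary (error). Returns a tuple
    (correct, rt_seconds), where rt includes a non-decision component.
    """
    x = 0.0
    t = 0.0
    step_sd = noise * dt ** 0.5  # per-step noise standard deviation
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (x >= boundary, t + non_decision)

def summarize(drift, boundary, n=2000, seed=42):
    """Accuracy and mean RT over n simulated trials."""
    rng = random.Random(seed)
    results = [simulate_ddm_trial(drift, boundary, rng=rng) for _ in range(n)]
    accuracy = sum(correct for correct, _ in results) / n
    mean_rt = sum(rt for _, rt in results) / n
    return accuracy, mean_rt
```

In this framework, a higher drift rate (e.g., for a skilled reader or a high-frequency word) yields both higher accuracy and faster mean RT, which is the kind of joint RT/accuracy pattern the diffusion analyses above exploit.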
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290.
ERIC Educational Resources Information Center
Beech, John R.; Mayall, Kate A.
2005-01-01
This study investigates the relative roles of internal and external letter features in word recognition. In Experiment 1 the efficacy of outer word fragments (words with all their horizontal internal features removed) was compared with inner word fragments (words with their outer features removed) as primes in a forward masking paradigm. These…
Functions of graphemic and phonemic codes in visual word-recognition.
Meyer, D E; Schvaneveldt, R W; Ruddy, M G
1974-03-01
Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.
Phonological Activation in Multi-Syllabic Word Recognition
ERIC Educational Resources Information Center
Lee, Chang H.
2007-01-01
Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.
Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words
ERIC Educational Resources Information Center
Baayen, R. H.; Feldman, L. B.; Schreuder, R.
2006-01-01
Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…
ERIC Educational Resources Information Center
Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.
2007-01-01
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…
Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning
ERIC Educational Resources Information Center
Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi
2008-01-01
The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…
ERIC Educational Resources Information Center
Weaver, Phyllis A.; Rosner, Jerome
1979-01-01
Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…
ERP Evidence of Hemispheric Independence in Visual Word Recognition
ERIC Educational Resources Information Center
Nemrodov, Dan; Harpaz, Yuval; Javitt, Daniel C.; Lavidor, Michal
2011-01-01
This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle…
Top-down modulation of ventral occipito-temporal responses during visual word recognition.
Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T
2011-04-01
Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading.
Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language
ERIC Educational Resources Information Center
Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter
2011-01-01
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…
Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan
2017-01-01
Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?
ERIC Educational Resources Information Center
McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc
2010-01-01
To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…
Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.
2007-01-01
Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…
Morphological Structures in Visual Word Recognition: The Case of Arabic
ERIC Educational Resources Information Center
Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)
2004-01-01
This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…
ERIC Educational Resources Information Center
Khateb, Asaid; Khateb-Abdelgani, Manal; Taha, Haitham Y.; Ibrahim, Raphiq
2014-01-01
This study aimed at assessing the effects of letters' connectivity in Arabic on visual word recognition. For this purpose, reaction times (RTs) and accuracy scores were collected from ninety third-, sixth- and ninth-grade native Arabic speakers during a lexical decision task, using fully connected (Cw), partially connected (PCw) and…
The Influence of Semantic Neighbours on Visual Word Recognition
ERIC Educational Resources Information Center
Yates, Mark
2012-01-01
Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…
Too little, too late: reduced visual span and speed characterize pure alexia.
Starrfelt, Randi; Habekost, Thomas; Leff, Alexander P
2009-12-01
Whether normal word reading includes a stage of visual processing selectively dedicated to word or letter recognition is highly debated. Characterizing pure alexia, a seemingly selective disorder of reading, has been central to this debate. Two main theories claim either that 1) Pure alexia is caused by damage to a reading specific brain region in the left fusiform gyrus or 2) Pure alexia results from a general visual impairment that may particularly affect simultaneous processing of multiple items. We tested these competing theories in 4 patients with pure alexia using sensitive psychophysical measures and mathematical modeling. Recognition of single letters and digits in the central visual field was impaired in all patients. Visual apprehension span was also reduced for both letters and digits in all patients. The only cortical region lesioned across all 4 patients was the left fusiform gyrus, indicating that this region subserves a function broader than letter or word identification. We suggest that a seemingly pure disorder of reading can arise due to a general reduction of visual speed and span, and explain why this has a disproportionate impact on word reading while recognition of other visual stimuli is less obviously affected.
Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.
Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf
2015-09-01
Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
A multistream model of visual word recognition.
Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie
2009-02-01
Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4), suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream, as with mixed-case presentation.
Ease of identifying words degraded by visual noise.
Barber, P; de la Mahotière, C
1982-08-01
A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.
A Critical Boundary to the Left-Hemisphere Advantage in Visual-Word Processing
ERIC Educational Resources Information Center
Deason, R.G.; Marsolek, C.J.
2005-01-01
Two experiments explored boundary conditions for the ubiquitous left-hemisphere advantage in visual-word recognition. Subjects perceptually identified words presented directly to the left or right hemisphere. Strong left-hemisphere advantages were observed for UPPERCASE and lowercase words. However, only a weak effect was observed for…
The impact of task demand on visual word recognition.
Yang, J; Zevin, J
2014-07-11
The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition.
An ERP investigation of visual word recognition in syllabary scripts.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2013-06-01
The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in "Experiment 1: Within-script priming", in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.
From Numbers to Letters: Feedback Regularization in Visual Word Recognition
ERIC Educational Resources Information Center
Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel
2010-01-01
Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…
ERIC Educational Resources Information Center
Hsiao, Janet H.; Cheung, Kit
2016-01-01
In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing.…
Kim, Albert; Lai, Vicky
2012-05-01
We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.
2009-01-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…
Strand, Julia F
2014-03-01
A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception 62:615-625, 2000; Luce & Pisoni, Ear and Hearing 19:1-36, 1998; McClelland & Elman, Cognitive Psychology 18:1-86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than are the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220-228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663-1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex.
Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification
ERIC Educational Resources Information Center
Burt, Jennifer S.
2009-01-01
University students participated in five experiments concerning the effects of unmasked, orthographically similar, primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modality to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory.
Cross-modal working memory binding and word recognition skills: how specific is the link?
Wang, Shinmin; Allen, Richard J
2018-04-01
Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.
Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh
2004-11-01
Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women similarly benefited from color to reduce spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
Task-Dependent Masked Priming Effects in Visual Word Recognition
Kinoshita, Sachiko; Norris, Dennis
2012-01-01
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
ERIC Educational Resources Information Center
Obregon, Mateo; Shillcock, Richard
2012-01-01
Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…
The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions
ERIC Educational Resources Information Center
Brouwer, Susanne; Bradlow, Ann R.
2016-01-01
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…
The Neural Basis of Obligatory Decomposition of Suffixed Words
ERIC Educational Resources Information Center
Lewis, Gwyneth; Solomyak, Olla; Marantz, Alec
2011-01-01
Recent neurolinguistic studies present somewhat conflicting evidence concerning the role of the inferior temporal cortex (IT) in visual word recognition within the first 200 ms after presentation. On the one hand, fMRI studies of the Visual Word Form Area (VWFA) suggest that the IT might recover representations of the orthographic form of words.…
Wheat, Katherine L; Cornelissen, Piers L; Sack, Alexander T; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-05-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within ∼100ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we used online transcranial magnetic stimulation (TMS) to investigate whether LIFGpo/PCG is necessary for (not just correlated with) visual word recognition by ∼100ms. Pulses were delivered to individually fMRI-defined LIFGpo/PCG in Dutch speakers 75-500ms after stimulus onset during reading and picture naming. Reading and picture naming reaction times were significantly slower following pulses at 225-300ms. Contrary to predictions, there was no disruption to reading for pulses before 225ms. This does not provide evidence in favour of a functional role for LIFGpo/PCG in reading before 225ms in this case, but does extend previous findings from picture stimuli to written Dutch words.
[The role of external letter positions in visual word recognition].
Perea, Manuel; Lupker, Stephen J
2007-11-01
A key issue for any computational model of visual word recognition is the choice of an input coding schema, which is responsible for assigning letter positions. Such a schema must reflect the fact that, according to recent research, nonwords created by transposing letters (e.g., caniso for CASINO) typically appear to be more similar to the word than nonwords created by replacing letters (e.g., caviro). In the present research, we initially carried out a computational analysis examining the degree to which the position of the transposition influences transposed-letter similarity effects. We next conducted a masked priming experiment with the lexical decision task to determine whether a transposed-letter priming advantage occurs when the first letter position is involved. Primes were created by either transposing the first and third letters (démula-MEDULA) or replacing the first and third letters (bérula-MEDULA). Results showed that there was no transposed-letter priming advantage in this situation. We discuss the implications of these results for models of visual word recognition.
The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words
ERIC Educational Resources Information Center
Xu, Joe; Taft, Marcus
2015-01-01
A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…
MEGALEX: A megastudy of visual and auditory word recognition.
Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan
2018-06-01
Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website (https://sedufau.shinyapps.io/megalex/) and are searchable at www.lexique.org, inside the Open Lexique search engine.
Encoding context and false recognition memories.
Bruce, Darryl; Phillips-Grant, Kimberly; Conrad, Nicole; Bona, Susan
2004-09-01
False recognition of an extralist word that is thematically related to all words of a study list may reflect internal activation of the theme word during encoding followed by impaired source monitoring at retrieval, that is, difficulty in determining whether the word had actually been experienced or merely thought of. To assist source monitoring, distinctive visual or verbal contexts were added to study words at input. Both types of context produced similar effects: False alarms to theme-word (critical) lures were reduced; remember judgements of critical lures called old were lower; and if contextual information had been added to lists, subjects indicated as much for list items and associated critical foils identified as old. The visual and verbal contexts used in the present studies were held to disrupt semantic categorisation of list words at input and to facilitate source monitoring at output.
Connell, Louise; Lynott, Dermot
2014-04-01
How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
Barnhart, Anthony S.; Goldinger, Stephen D.
2014-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708
Chang, Yu-Cherng C; Khan, Sheraz; Taulu, Samu; Kuperberg, Gina; Brown, Emery N; Hämäläinen, Matti S; Temereanca, Simona
2018-01-01
Saccadic eye movements are an inherent component of natural reading, yet their contribution to information processing at subsequent fixation remains elusive. Here we use anatomically-constrained magnetoencephalography (MEG) to examine cortical activity following saccades as healthy human subjects engaged in a one-back word recognition task. This activity was compared with activity following external visual stimulation that mimicked saccades. A combination of procedures was employed to eliminate saccadic ocular artifacts from the MEG signal. Both saccades and saccade-like external visual stimulation produced early-latency responses beginning ~70 ms after onset in occipital cortex and spreading through the ventral and dorsal visual streams to temporal, parietal and frontal cortices. Robust differential activity following the onset of saccades vs. similar external visual stimulation emerged during 150-350 ms in a left-lateralized cortical network. This network included: (i) left lateral occipitotemporal (LOT) and nearby inferotemporal (IT) cortex; (ii) left posterior Sylvian fissure (PSF) and nearby multimodal cortex; and (iii) medial parietooccipital (PO), posterior cingulate and retrosplenial cortices. Moreover, this left-lateralized network colocalized with word repetition priming effects. Together, results suggest that central saccadic mechanisms influence a left-lateralized language network in occipitotemporal and temporal cortex above and beyond saccadic influences at preceding stages of information processing during visual word recognition.
Visual recognition of permuted words
NASA Astrophysics Data System (ADS)
Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.
2010-02-01
In the current study, we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition of permuted and non-permuted words involves two distinct mental processes, and that people use different strategies for handling permuted words than for normal words. A comparison of reading behavior across the two languages is also presented. We frame our study in the context of dual-route theories of reading and observe that dual-route theory is consistent with our hypothesis of distinct underlying cognitive processes for reading permuted and non-permuted words. We conducted three lexical decision experiments to analyze how reading is degraded by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response-time latencies between the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and 11% for German. We also found a considerable difference in reading behavior between cursive and alphabetic scripts: reading Urdu is comparatively slower than reading German, owing to the characteristics of its cursive script.
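The latency comparison described above can be sketched with a Welch's t statistic computed by hand. The response-time distributions below are invented for illustration (a 130 ms slowdown for permuted words); they are not the paper's data.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = a.var(ddof=1), b.var(ddof=1)
    return (a.mean() - b.mean()) / np.sqrt(va / len(a) + vb / len(b))

rng = np.random.default_rng(1)
# Hypothetical lexical-decision latencies in milliseconds
rt_normal = rng.normal(650, 80, 40)
rt_permuted = rng.normal(780, 90, 40)

t = welch_t(rt_normal, rt_permuted)
print(f"Welch t = {t:.2f}")  # strongly negative: permuted words are slower
```

In practice one would pair this with a distribution-free test (as the authors do) when normality of the latency distributions is in doubt.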
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
The impact of left and right intracranial tumors on picture and word recognition memory.
Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V
2004-02-01
This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.
Does a pear growl? Interference from semantic properties of orthographic neighbors.
Pecher, Diane; de Rooij, Jimmy; Zeelenberg, René
2009-07-01
In this study, we investigated whether semantic properties of a word's orthographic neighbors are activated during visual word recognition. In two experiments, words were presented with a property that was not true for the word itself. We manipulated whether the property was true for an orthographic neighbor of the word. Our results showed that rejection of the property was slower and less accurate when the property was true for a neighbor than when the property was not true for a neighbor. These findings indicate that semantic information is activated before orthographic processing is finished. The present results are problematic for the links model (Forster, 2006; Forster & Hector, 2002) that was recently proposed in order to bring form-first models of visual word recognition into line with previously reported findings (Forster & Hector, 2002; Pecher, Zeelenberg, & Wagenmakers, 2005; Rodd, 2004).
The time course of morphological processing during spoken word recognition in Chinese.
Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan
2017-12-01
We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, earlier than the whole-word competitor did. These results suggest that lexical access to the auditory word is incremental and that morphological processing (i.e., semantic access to the first constituent) occurs at an early processing stage, before access to the representation of the whole word in Chinese.
Immediate effects of anticipatory coarticulation in spoken-word recognition
Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.
2014-01-01
Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles which contained natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
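A Gaussian classifier of the sort mentioned above can be sketched in a few lines: fit a per-class Gaussian over formant features and classify by log-likelihood. The F1/F2 values, class labels, and separations below are invented for illustration; they are not the study's acoustic measurements.

```python
import numpy as np

class SimpleGaussianClassifier:
    """Diagonal-covariance Gaussian classifier over formant features."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes_])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class Gaussian
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None, :, :] - self.mu_[:, None, :]) ** 2
                     / self.var_[:, None, :]).sum(axis=2)
        return self.classes_[np.argmax(ll, axis=0)]

rng = np.random.default_rng(2)
# Hypothetical F1/F2 measurements (Hz) from the article's vowel,
# coarticulated toward an upcoming /l/ vs. /b/ onset
X_l = rng.normal([450, 1600], [40, 120], (50, 2))
X_b = rng.normal([500, 1200], [40, 120], (50, 2))
X = np.vstack([X_l, X_b])
y = np.array(["l"] * 50 + ["b"] * 50)

clf = SimpleGaussianClassifier().fit(X, y)
acc = (clf.predict(X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

The point of the sketch is only that a handful of formant measurements can support above-chance prediction of an upcoming onset, which is the logic behind the experiment's stimulus validation.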
Semantic size does not matter: "bigger" words are not recognized faster.
Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A
2011-06-01
Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment (despite a much larger subject sample size than Sereno et al.) converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.
Effect of word familiarity on visually evoked magnetic fields.
Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M
2004-11-30
This study investigated the effect of the word familiarity of visual stimuli on the word-recognition function of the human brain. Word familiarity is an index of the relative ease of word perception, and is reflected in the speed and accuracy of word recognition. We studied the effect of word familiarity on the elicitation of visually evoked magnetic fields in a word-naming task, using "Hiragana" (phonetic characters in Japanese orthography) as visual stimuli. The words were selected from a database of lexical properties of Japanese. The four-character "Hiragana" words were grouped and presented in four classes of familiarity. Three components were observed in the averaged waveforms of the root-mean-square (RMS) value, at latencies of about 100 ms, 150 ms, and 220 ms. The RMS value of the 220-ms component showed a significant positive correlation (F(3, 36) = 5.501, p = 0.035) with familiarity. ECDs of the 220-ms component were observed in the intraparietal sulcus (IPS). Increments in the RMS value of the 220-ms component, which might reflect ideographic word recognition (retrieving the word "as a whole"), grew with increasing familiarity. The interaction among characters, which increased with familiarity, might make the word function "as a large symbol" and enhance a "pop-out" effect, strengthening the segmentation of the character string (as a figure) from the ground.
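The sensor-level RMS measure referred to above is simple to compute: take the root mean square across sensors at each time point of the averaged evoked response. The sketch below uses simulated data with an evoked component injected at a known latency; the array shapes and values are assumptions, not the study's recordings.

```python
import numpy as np

def rms_waveform(epochs):
    """Root-mean-square across sensors at each time point.

    epochs: array of shape (n_sensors, n_times).
    """
    return np.sqrt((epochs ** 2).mean(axis=0))

rng = np.random.default_rng(3)
n_sensors, n_times = 64, 400          # hypothetical sensor array and epoch length
data = rng.normal(0.0, 1.0, (n_sensors, n_times))
# Inject a hypothetical evoked component around sample 220
data[:, 210:230] += 3.0

rms = rms_waveform(data)
peak = int(np.argmax(rms))
print(f"RMS peaks at sample {peak}")
```

Collapsing the sensor array into a single RMS waveform like this is what lets component latencies (e.g., the 220-ms component above) be read off and compared across conditions.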
Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.
Robotham, Ro J; Starrfelt, Randi
2017-01-01
Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.
Reconsidering the role of temporal order in spoken word recognition.
Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob
2013-10-01
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
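The anadrome relation the authors exploit (same phonemes, different order) has a simple operational definition, sketched below with made-up phoneme lists; the phoneme transcriptions are illustrative only.

```python
from collections import Counter

def is_anadrome(phonemes_a, phonemes_b):
    """True if the two words share the same multiset of phonemes
    but in a different order (e.g., bus /b-ʌ-s/ vs. sub /s-ʌ-b/)."""
    return (Counter(phonemes_a) == Counter(phonemes_b)
            and list(phonemes_a) != list(phonemes_b))

print(is_anadrome(["b", "ʌ", "s"], ["s", "ʌ", "b"]))  # → True (anadrome pair)
print(is_anadrome(["b", "ʌ", "s"], ["s", "ʌ", "n"]))  # → False (shares vowel only)
```

The vowel-only control (e.g., "sun" for "bus") falls out naturally: it fails the multiset test, which is exactly the contrast the eye-tracking comparison relies on.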
Recognition-induced forgetting of faces in visual long-term memory.
Rugo, Kelsi F; Tamler, Kendall N; Woodman, Geoffrey F; Maxcey, Ashleigh M
2017-10-01
Despite more than a century of evidence that long-term memory for pictures and words are different, much of what we know about memory comes from studies using words. Recent research examining visual long-term memory has demonstrated that recognizing an object induces the forgetting of objects from the same category. This recognition-induced forgetting has been shown with a variety of everyday objects. However, unlike everyday objects, faces are objects of expertise. As a result, faces may be immune to recognition-induced forgetting. However, despite excellent memory for such stimuli, we found that faces were susceptible to recognition-induced forgetting. Our findings have implications for how models of human memory account for recognition-induced forgetting as well as represent objects of expertise and consequences for eyewitness testimony and the justice system.
Syllabic Parsing in Children: A Developmental Study Using Visual Word-Spotting in Spanish
ERIC Educational Resources Information Center
Álvarez, Carlos J.; Garcia-Saavedra, Guacimara; Luque, Juan L.; Taft, Marcus
2017-01-01
Some inconsistency is observed in the results from studies of reading development regarding the role of the syllable in visual word recognition, perhaps due to a disparity between the tasks used. We adopted a word-spotting paradigm, with Spanish children of second grade (mean age: 7 years) and sixth grade (mean age: 11 years). The children were…
Contextual diversity is a main determinant of word identification times in young readers.
Perea, Manuel; Soares, Ana Paula; Comesaña, Montserrat
2013-09-01
Recent research with college-aged skilled readers by Adelman and colleagues revealed that contextual diversity (i.e., the number of contexts in which a word appears) is a more critical determinant of visual word recognition than mere repeated exposure (i.e., word frequency) (Psychological Science, 2006, Vol. 17, pp. 814-823). Given that contextual diversity has been claimed to be a relevant factor to word acquisition in developing readers, the effects of contextual diversity should also be a main determinant of word identification times in developing readers. A lexical decision experiment was conducted to examine the effects of contextual diversity and word frequency in young readers (children in fourth grade). Results revealed a sizable effect of contextual diversity, but not of word frequency, thereby generalizing Adelman and colleagues' data to a child population. These findings call for the implementation of dynamic developmental models of visual word recognition that go beyond a learning rule by mere exposure.
Hargreaves, Ian S; Pexman, Penny M
2014-05-01
According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision (LDT) and a semantic categorization (SCT) task. We used linear mixed effects to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400ms). Results showed an early influence of number of senses and ARC in the SCT. In both LDT and SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time, and of their relevance to task-specific demands.
Talker variability in audio-visual speech perception
Heald, Shannon L. M.; Nusbaum, Howard C.
2014-01-01
A change in talker is a change in the context for the phonetic interpretation of acoustic patterns of speech. Different talkers have different mappings between acoustic patterns and phonetic categories and listeners need to adapt to these differences. Despite this complexity, listeners are adept at comprehending speech in multiple-talker contexts, albeit at a slight but measurable performance cost (e.g., slower recognition). So far, this talker variability cost has been demonstrated only in audio-only speech. Other research in single-talker contexts has shown, however, that when listeners are able to see a talker’s face, speech recognition is improved under adverse listening (e.g., noise or distortion) conditions that can increase uncertainty in the mapping between acoustic patterns and phonetic categories. Does seeing a talker’s face reduce the cost of word recognition in multiple-talker contexts? We used a speeded word-monitoring task in which listeners make quick judgments about target word recognition in single- and multiple-talker contexts. Results show faster recognition performance in single-talker conditions compared to multiple-talker conditions for both audio-only and audio-visual speech. However, recognition time in a multiple-talker context was slower in the audio-visual condition compared to audio-only condition. These results suggest that seeing a talker’s face during speech perception may slow recognition by increasing the importance of talker identification, signaling to the listener a change in talker has occurred. PMID:25076919
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J
2009-02-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.
Automatization and Orthographic Development in Second Language Visual Word Recognition
ERIC Educational Resources Information Center
Kida, Shusaku
2016-01-01
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
The Overlap Model: A Model of Letter Position Coding
ERIC Educational Resources Information Center
Gomez, Pablo; Ratcliff, Roger; Perea, Manuel
2008-01-01
Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…
Morphological effects in children word reading: a priming study in fourth graders.
Casalis, Séverine; Dusautoir, Marion; Colé, Pascale; Ducrot, Stéphanie
2009-09-01
A growing corpus of evidence suggests that morphology could play a role in reading acquisition, and that young readers could be sensitive to the morphemic structure of written words. In the present experiment, we examined whether and when morphological information is activated in word recognition. French fourth graders made visual lexical decisions to derived words preceded by primes sharing either a morphological or an orthographic relationship with the target. Results showed significant and equivalent facilitation priming effects in cases of morphologically and orthographically related primes at the shortest prime duration, and a significant facilitation priming effect in the case of only morphologically related primes at the longer prime duration. Thus, these results strongly suggest that a morphological level is involved in children's visual word recognition, although it is not distinct from the formal one at an early stage of word processing.
The effects of articulatory suppression on word recognition in Serbian.
Tenjović, Lazar; Lalović, Dejan
2005-11-01
The relatedness of phonological coding to the articulatory mechanisms in visual word recognition varies across writing systems. While articulatory suppression (i.e., continuous verbalising during a visual word processing task) has a detrimental effect on the processing of Japanese words printed in the regular syllabic kana script, it has no such effect on the processing of irregular alphabetic English words. Besner (1990) proposed an experiment in the Serbian language, which is written in two regular alphabetic scripts (Cyrillic and Roman), to disentangle the importance of script regularity versus the syllabic-alphabetic dimension for the effects observed. Articulatory suppression had an equally detrimental effect in a lexical decision task for both alphabetically regular and distorted (by a mixture of the two alphabets) Serbian words, but comparisons of the articulatory suppression effect sizes obtained in Serbian with those obtained in English and Japanese suggest "alphabeticity-syllabicity" to be the more critical dimension in determining the relatedness of phonological coding and articulatory activity.
Word learning and the cerebral hemispheres: from serial to parallel processing of written words
Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura
2009-01-01
Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140
ERIC Educational Resources Information Center
Wu, Shiyu; Ma, Zheng
2017-01-01
Previous research has indicated that, in viewing a visual word, the activated phonological representation in turn activates its homophone, causing semantic interference. Using this mechanism of phonological mediation, this study investigated native-language phonological interference in visual recognition of Chinese two-character compounds by early…
Learning during processing: Word learning doesn’t wait for word recognition to finish
Apfelbaum, Keith S.; McMurray, Bob
2017-01-01
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082
A hierarchical word-merging algorithm with class separability measure.
Wang, Lei; Zhou, Luping; Shen, Chunhua; Liu, Lingqiao; Liu, Huan
2014-03-01
In image recognition with the bag-of-features model, a small-sized visual codebook is usually preferred to obtain a low-dimensional histogram representation and high computational efficiency. Such a visual codebook has to be discriminative enough to achieve excellent recognition performance. To create a compact and discriminative codebook, in this paper we propose to merge the visual words in a large-sized initial codebook by maximally preserving class separability. We first show that this results in a difficult optimization problem. To deal with this situation, we devise a suboptimal but very efficient hierarchical word-merging algorithm, which optimally merges two words at each level of the hierarchy. By exploiting the characteristics of the class separability measure and designing a novel indexing structure, the proposed algorithm can hierarchically merge 10,000 visual words down to two words in merely 90 seconds. Also, to show the properties of the proposed algorithm and reveal its advantages, we conduct detailed theoretical analysis to compare it with another hierarchical word-merging algorithm that maximally preserves mutual information, obtaining interesting findings. Experimental studies are conducted to verify the effectiveness of the proposed algorithm on multiple benchmark data sets. As shown, it can efficiently produce more compact and discriminative codebooks than the state-of-the-art hierarchical word-merging algorithms, especially when the size of the codebook is significantly reduced.
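The greedy merging idea described above can be sketched in a few lines. The following is a minimal, brute-force illustration (not the authors' efficient indexed algorithm, which merges 10,000 words in seconds): at each level, merge the pair of visual-word bins whose combination best preserves a simple class-separability proxy, here the between-class scatter of the per-class word distributions. The function name, the separability proxy, and the data layout are all assumptions for illustration.

```python
import numpy as np

def merge_words(class_hists, target_size):
    """Greedily merge visual-word bins while preserving a simple
    class-separability proxy. Hypothetical sketch: O(n^2) per merge,
    unlike the paper's indexed hierarchical algorithm."""
    H = np.asarray(class_hists, dtype=float)   # shape: (n_classes, n_words)
    groups = [[j] for j in range(H.shape[1])]  # original words held by each bin

    def separability(M):
        # between-class scatter of the row-normalized word distributions
        P = M / M.sum(axis=1, keepdims=True)
        return np.sum((P - P.mean(axis=0)) ** 2)

    while H.shape[1] > target_size:
        best, best_pair = -np.inf, None
        for a in range(H.shape[1]):
            for b in range(a + 1, H.shape[1]):
                cols = [c for c in range(H.shape[1]) if c not in (a, b)]
                # candidate codebook after merging bins a and b
                M = np.column_stack([H[:, cols], H[:, a] + H[:, b]])
                s = separability(M)
                if s > best:
                    best, best_pair = s, (a, b)
        a, b = best_pair
        keep = [c for c in range(H.shape[1]) if c not in (a, b)]
        H = np.column_stack([H[:, keep], H[:, a] + H[:, b]])
        groups = [groups[c] for c in keep] + [groups[a] + groups[b]]
    return H, groups
```

Merging is lossy by construction, so the point of the separability criterion is to choose, at each level, the merge that sacrifices the least discriminative power.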
Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann
2008-09-01
Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.
Readiness and Phonetic Analysis of Words in Grades K-2.
ERIC Educational Resources Information Center
Campbell, Bonnie; Quinn, Goldie
The method used at the Bellevue, Nebraska, public schools to teach reading readiness and the phonetic analysis of words in kindergarten through grade two is described. Suggestions for teaching the readiness skills of auditory and visual perception, vocabulary skills of word recognition and word meaning, and the phonetic analysis of words in grades…
Ambiguity and Relatedness Effects in Semantic Tasks: Are They Due to Semantic Coding?
ERIC Educational Resources Information Center
Hino, Yasushi; Pexman, Penny M.; Lupker, Stephen J.
2006-01-01
According to parallel distributed processing (PDP) models of visual word recognition, the speed of semantic coding is modulated by the nature of the orthographic-to-semantic mappings. Consistent with this idea, an ambiguity disadvantage and a relatedness-of-meaning (ROM) advantage have been reported in some word recognition tasks in which semantic…
ERIC Educational Resources Information Center
Jimenez, Juan E.; Ortiz, Maria del Rosario; Rodrigo, Mercedes; Hernandez-Valle, Isabel; Ramirez, Gustavo; Estevez, Adelina; O'Shanahan, Isabel; Trabaue, Maria de la Luz
2003-01-01
A study assessed whether the effects of computer-assisted practice on visual word recognition differed for 73 Spanish children with reading disabilities with or without aptitude-achievement discrepancy. Computer-assisted intervention improved word recognition. However, children with dyslexia had more difficulties than poor readers during…
Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.
Strother, Lars; Coros, Alexandra M; Vilis, Tutis
2016-02-01
Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex--especially those that evolved to support the visual processing of faces--are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.
ERP correlates of letter identity and letter position are modulated by lexical frequency
Vergara-Martínez, Marta; Perea, Manuel; Gómez, Pablo; Swaab, Tamara Y.
2013-01-01
The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: A transposed-letter (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition. PMID:23454070
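The TL/RL stimulus construction described above can be illustrated with a small helper that reproduces the abstract's own example (BRIDGE → BRIGDE / BRITGE). The default position and the replacement-letter pool are assumptions for illustration; the study's actual stimuli were subject to tighter psycholinguistic constraints.

```python
def make_pseudowords(word, i=3):
    """Create a transposed-letter (TL) and a replaced-letter (RL)
    pseudoword from `word` by manipulating letters at positions i and
    i+1. Illustrative sketch only; position i and the candidate pool
    below are hypothetical, not the study's stimulus constraints."""
    letters = list(word)
    # TL: swap two adjacent internal letters (BRIDGE -> BRIGDE)
    tl = letters[:i] + [letters[i + 1], letters[i]] + letters[i + 2:]
    # RL: replace the first of those letters with one absent from the word
    replacement = next(c for c in "TKPMZ" if c not in word)
    rl = letters[:i] + [replacement] + letters[i + 1:]
    return "".join(tl), "".join(rl)
```

For the abstract's example, `make_pseudowords("BRIDGE")` yields the TL item BRIGDE and the RL item BRITGE, matching the stimuli described.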
Stimulus-driven changes in the direction of neural priming during visual word recognition.
Pas, Maciej; Nakamura, Kimihiro; Sawamoto, Nobukatsu; Aso, Toshihiko; Fukuyama, Hidenao
2016-01-15
Visual object recognition is generally known to be facilitated when targets are preceded by the same or relevant stimuli. For written words, however, the beneficial effect of priming can be reversed when primes and targets share initial syllables (e.g., "boca" and "bono"). Using fMRI, the present study explored neuroanatomical correlates of this negative syllabic priming. In each trial, participants made a semantic judgment about a centrally presented target, which was preceded by a masked prime flashed either to the left or right visual field. We observed that the inhibitory priming during reading was associated with a left-lateralized effect of repetition enhancement in the inferior frontal gyrus (IFG), rather than repetition suppression in the ventral visual region previously associated with facilitatory behavioral priming. We further performed a second fMRI experiment using a classical whole-word repetition priming paradigm with the same hemifield procedure and task instruction, and obtained well-known effects of repetition suppression in the left occipito-temporal cortex. These results therefore suggest that the left IFG constitutes a fast word processing system distinct from the posterior visual word-form system and that the directions of repetition effects can change with intrinsic properties of stimuli even when participants' cognitive and attentional states are kept constant.
Schröter, Pauline; Schroeder, Sascha
2017-12-01
With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.
Orthographic similarity: the case of "reversed anagrams".
Morris, Alison L; Still, Mary L
2012-07-01
How orthographically similar are words such as paws and swap, flow and wolf, or live and evil? According to the letter position coding schemes used in models of visual word recognition, these reversed anagrams are considered to be less similar than words that share letters in the same absolute or relative positions (such as home and hose or plan and lane). Therefore, reversed anagrams should not produce the standard orthographic similarity effects found using substitution neighbors (e.g., home, hose). Simulations using the spatial coding model (Davis, Psychological Review 117, 713-758, 2010), for example, predict an inhibitory masked-priming effect for substitution neighbor word pairs but a null effect for reversed anagrams. Nevertheless, we obtained significant inhibitory priming using both stimulus types (Experiment 1). We also demonstrated that robust repetition blindness can be obtained for reversed anagrams (Experiment 2). Reversed anagrams therefore provide a new test for models of visual word recognition and orthographic similarity.
Fischer-Baum, Simon; Englebretson, Robert
2016-08-01
Reading relies on the recognition of units larger than single letters and smaller than whole words. Previous research has linked sublexical structures in reading to properties of the visual system, specifically to the parallel processing of letters that the visual system enables. But whether the visual system is essential for this to happen, or whether the recognition of sublexical structures may emerge by other means, is an open question. To address this question, we investigate braille, a writing system that relies exclusively on the tactile rather than the visual modality. We provide experimental evidence demonstrating that adult readers of (English) braille are sensitive to sublexical units. Contrary to prior assumptions in the braille research literature, we find strong evidence that braille readers do indeed access sublexical structure, namely the processing of multi-cell contractions as single orthographic units and the recognition of morphemes within morphologically-complex words. Therefore, we conclude that the recognition of sublexical structure is not exclusively tied to the visual system. However, our findings also suggest that there are aspects of morphological processing on which braille and print readers differ, and that these differences may, crucially, be related to reading using the tactile rather than the visual sensory modality.
Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C
2009-01-01
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
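The core of the model described above, words as points in a feature space and recognition as probabilistic inference, can be sketched minimally: combine independent Gaussian-noise auditory and visual observations into a posterior over candidate words. This is a toy sketch of optimal cue integration under assumed isotropic noise, not the paper's fitted model; the function name and parameterization are illustrative.

```python
import numpy as np

def posterior_over_words(words, x_aud, x_vis, sigma_aud, sigma_vis):
    """Posterior over candidate words (points in a feature space) given
    independent auditory and visual observations corrupted by Gaussian
    noise. Minimal sketch of Bayesian cue integration; assumes a flat
    prior and isotropic noise."""
    words = np.asarray(words, dtype=float)      # shape: (n_words, n_dims)

    def log_lik(obs, sigma):
        d2 = np.sum((words - np.asarray(obs, dtype=float)) ** 2, axis=1)
        return -d2 / (2 * sigma ** 2)

    # independence: log-likelihoods add across modalities
    log_post = log_lik(x_aud, sigma_aud) + log_lik(x_vis, sigma_vis)
    log_post -= log_post.max()                  # numerical stability
    p = np.exp(log_post)
    return p / p.sum()
```

The reliability weighting falls out automatically: when auditory noise is high (large `sigma_aud`), the auditory likelihood flattens and the posterior is dominated by the visual cue, and vice versa, which is the ingredient the model uses to predict when visual enhancement is largest.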
Hsiao, Janet Hui-Wen
2011-11-01
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than the number of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. Through training a computational model for SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the fundamental structural differences in information between SP and PS characters, as opposed to the fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that accounts for hemispheric asymmetry effects in visual word recognition.
Brébion, G; Ohlsen, R I; Bressan, R A; David, A S
2012-12-01
Previous research has shown associations between source memory errors and hallucinations in patients with schizophrenia. We bring together here findings from a broad memory investigation to specify better the type of source memory failure that is associated with auditory and visual hallucinations. Forty-one patients with schizophrenia and 43 healthy participants underwent a memory task involving recall and recognition of lists of words, recognition of pictures, memory for temporal and spatial context of presentation of the stimuli, and remembering whether target items were presented as words or pictures. False recognition of words and pictures was associated with hallucination scores. The extra-list intrusions in free recall were associated with verbal hallucinations whereas the intra-list intrusions were associated with a global hallucination score. Errors in discriminating the temporal context of word presentation and the spatial context of picture presentation were associated with auditory hallucinations. The tendency to remember verbal labels of items as pictures of these items was associated with visual hallucinations. Several memory errors were also inversely associated with affective flattening and anhedonia. Verbal and visual hallucinations are associated with confusion between internal verbal thoughts or internal visual images and perception. In addition, auditory hallucinations are associated with failure to process or remember the context of presentation of the events. Certain negative symptoms have an opposite effect on memory errors.
English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition
ERIC Educational Resources Information Center
Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee
2017-01-01
Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2009-01-01
We present an MEG study of heteronym recognition, aiming to distinguish between two theories of lexical access: the "early access" theory, which entails that lexical access occurs at early (pre 200 ms) stages of processing, and the "late access" theory, which interprets this early activity as orthographic word-form identification rather than…
Audiovisual speech facilitates voice learning.
Sheffert, Sonya M; Olson, Elizabeth
2004-02-01
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
2012-06-01
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss.
ERIC Educational Resources Information Center
Gallistel, Elizabeth; And Others
Ten auditory and ten visual aptitude measures were administered in the middle of first grade to a sample of 58 low readers. More than half of this low reader sample had scored more than a year below expected grade level on two or more aptitudes. Word recognition measures were administered after four months of sight word instruction and again after…
Matching Heard and Seen Speech: An ERP Study of Audiovisual Word Recognition
Kaganovich, Natalya; Schumaker, Jennifer; Rowland, Courtney
2016-01-01
Seeing articulatory gestures while listening to speech-in-noise (SIN) significantly improves speech understanding. However, the degree of this improvement varies greatly among individuals. We examined a relationship between two distinct stages of visual articulatory processing and the SIN accuracy by combining a cross-modal repetition priming task with ERP recordings. Participants first heard a word referring to a common object (e.g., pumpkin) and then decided whether the subsequently presented visual silent articulation matched the word they had just heard. Incongruent articulations elicited a significantly enhanced N400, indicative of a mismatch detection at the pre-lexical level. Congruent articulations elicited a significantly larger LPC, indexing articulatory word recognition. Only the N400 difference between incongruent and congruent trials was significantly correlated with individuals’ SIN accuracy improvement in the presence of the talker’s face. PMID:27155219
Similarity and Difference in Learning L2 Word-Form
ERIC Educational Resources Information Center
Hamada, Megumi; Koda, Keiko
2011-01-01
This study explored similarity and difference in L2 written word-form learning from a cross-linguistic perspective. This study investigated whether learners' L1 orthographic background, which influences L2 visual word recognition (e.g., Wang et al., 2003), also influences L2 word-form learning, in particular, the sensitivity to phonological and…
What Do Letter Migration Errors Reveal About Letter Position Coding in Visual Word Recognition?
ERIC Educational Resources Information Center
Davis, Colin J.; Bowers, Jeffrey S.
2004-01-01
Dividing attention across multiple words occasionally results in misidentifications whereby letters apparently migrate between words. Previous studies have found that letter migrations preserve within-word letter position, which has been interpreted as support for position-specific letter coding. To investigate this issue, the authors used word…
Word Stress in German Single-Word Reading
ERIC Educational Resources Information Center
Beyermann, Sandra; Penke, Martina
2014-01-01
This article reports a lexical-decision experiment that was conducted to investigate the impact of word stress on visual word recognition in German. Reaction-time latencies and error rates of German readers on different levels of reading proficiency (i.e., third graders and fifth graders from primary school and university students) were compared…
Cognate and Word Class Ambiguity Effects in Noun and Verb Processing
ERIC Educational Resources Information Center
Bultena, Sybrine; Dijkstra, Ton; van Hell, Janet G.
2013-01-01
This study examined how noun and verb processing in bilingual visual word recognition are affected by within and between-language overlap. We investigated how word class ambiguous noun and verb cognates are processed by bilinguals, to see if co-activation of overlapping word forms between languages benefits from additional overlap within a…
Velan, Hadas; Frost, Ram
2010-01-01
Recent studies suggest that basic effects which are markers of visual word recognition in Indo-European languages cannot be obtained in Hebrew or in Arabic. Although Hebrew has an alphabetic writing system, just like English, French, or Spanish, a series of studies consistently suggested that simple form-orthographic priming and letter-transposition priming are not found in Hebrew. In four experiments, we tested the hypothesis that this is due to the fact that Semitic words have an underlying structure that constrains the possible alignment of phonemes and their respective letters. The experiments contrasted typical Semitic words, which are root-derived, with Hebrew words of non-Semitic origin, which are morphologically simple and resemble base words in European languages. Using RSVP, TL priming, and form-priming manipulations, we show that Hebrew readers process morphologically simple Hebrew words in much the same way as they process English words. These words indeed reveal the typical form-priming and TL priming effects reported in European languages. In contrast, words with internal structure are processed differently, and require a different code for lexical access. We discuss the implications of these findings for current models of visual word recognition. PMID:21163472
Towards a universal model of reading.
Frost, Ram
2012-10-01
In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the special way in which the human brain encodes the position of letters in printed words. The present article discusses the theoretical shortcomings and misconceptions of this approach to visual word recognition. A systematic review of data obtained from a variety of languages demonstrates that letter-order insensitivity is neither a general property of the cognitive system nor a property of the brain in encoding letters. Rather, it is a variant and idiosyncratic characteristic of some languages, mostly European, reflecting a strategy of optimizing encoding resources, given the specific structure of words. Since the main goal of reading research is to develop theories that describe the fundamental and invariant phenomena of reading across orthographies, an alternative approach to model visual word recognition is offered. The dimensions of a possible universal model of reading, which outlines the common cognitive operations involved in orthographic processing in all writing systems, are discussed.
Recognition and reading aloud of kana and kanji word: an fMRI study.
Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao
2009-03-16
It has been proposed that different brain regions are recruited for processing the two Japanese writing systems, kanji (morphograms) and kana (syllabograms). However, this difference may depend upon what type of word is used and what type of task is performed. Using fMRI, we investigated brain activation for processing kanji and kana words of similarly high familiarity in two tasks: word recognition and reading aloud. In both tasks, words and non-words were presented side by side; subjects were required to press a button corresponding to the real word in the word recognition task and to read aloud the real word in the reading-aloud task. Brain activations were similar between kanji and kana during the reading-aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal, and occipitotemporal cortices, all related mainly to visual word-form analysis and visuospatial attention. Concerning differences in brain activity between the two tasks, differential activation was found only in regions associated with task-specific sensorimotor processing for kana, whereas the visuospatial attention network also showed greater activation during the word recognition task than during the reading-aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between script characteristics and task demands.
ERIC Educational Resources Information Center
von Feldt, James R.; Subtelny, Joanne
The Webster diacritical system provides a discrete symbol for each sound and designates the appropriate syllable to be stressed in any polysyllabic word; the symbol system presents cues for correct production, auditory discrimination, and visual recognition of new words in print and as visual speech gestures. The Webster's Diacritical CAI Program…
Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study
ERIC Educational Resources Information Center
Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti
2006-01-01
By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…
Hsiao, Janet H; Cheung, Kit
2016-03-01
In Chinese orthography, the most common character structure consists of a semantic radical on the left and a phonetic radical on the right (SP characters); the minority, opposite arrangement also exists (PS characters). Recent studies showed that SP character processing is more left hemisphere (LH) lateralized than PS character processing. Nevertheless, it remains unclear whether this is due to phonetic radical position or character type frequency. Through computational modeling with artificial lexicons, in which we implement a theory of hemispheric asymmetry in perception but do not assume phonological processing being LH lateralized, we show that the difference in character type frequency alone is sufficient to exhibit the effect that the dominant type has a stronger LH lateralization than the minority type. This effect is due to higher visual similarity among characters in the dominant type than the minority type, demonstrating the modulation of visual similarity of words on hemispheric lateralization. Copyright © 2015 Cognitive Science Society, Inc.
Dundas, Eva M.; Plaut, David C.; Behrmann, Marlene
2014-01-01
The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that, although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition do not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. PMID:24933662
Dundas, Eva M; Plaut, David C; Behrmann, Marlene
2014-08-01
The adult human brain would appear to have specialized and independent neural systems for the visual processing of words and faces. Extensive evidence has demonstrated greater selectivity for written words in the left over right hemisphere, and, conversely, greater selectivity for faces in the right over left hemisphere. This study examines the emergence of these complementary neural profiles, as well as the possible relationship between them. Using behavioral and neurophysiological measures, in adults, we observed the standard finding of greater accuracy and a larger N170 ERP component in the left over right hemisphere for words, and conversely, greater accuracy and a larger N170 in the right over the left hemisphere for faces. We also found that although children aged 7-12 years revealed the adult hemispheric pattern for words, they showed neither a behavioral nor a neural hemispheric superiority for faces. Of particular interest, the magnitude of their N170 for faces in the right hemisphere was related to that of the N170 for words in their left hemisphere. These findings suggest that the hemispheric organization of face recognition and of word recognition does not develop independently, and that word lateralization may precede and drive later face lateralization. A theoretical account for the findings, in which competition for visual representations unfolds over the course of development, is discussed. Copyright © 2014 Elsevier Ltd. All rights reserved.
Reduced effects of pictorial distinctiveness on false memory following dynamic visual noise.
Parker, Andrew; Kember, Timothy; Dagnall, Neil
2017-07-01
High levels of false recognition for non-presented items typically occur following exposure to lists of associated words. These false recognition effects can be reduced by making the studied items more distinctive through the presentation of pictures during encoding. One explanation is that during recognition, participants expect or attempt to retrieve distinctive pictorial information in order to evaluate the study status of the test item. If this involves the retrieval and use of visual imagery, then interfering with imagery processing should reduce the effectiveness of pictorial information in false memory reduction. In the current experiment, visual-imagery processing was disrupted at retrieval by the use of dynamic visual noise (DVN). It was found that the effects of DVN dissociated true from false memory. Memory for studied words was not influenced by the presence of an interfering noise field. However, false memory was increased and the effects of picture-induced distinctiveness were eliminated. DVN also increased false recollection and remember responses to unstudied items.
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
Universal brain systems for recognizing word shapes and handwriting gestures during reading
Nakamura, Kimihiro; Kuo, Wen-Jui; Pegado, Felipe; Cohen, Laurent; Tzeng, Ovid J. L.; Dehaene, Stanislas
2012-01-01
Do the neural circuits for reading vary across culture? Reading of visually complex writing systems such as Chinese has been proposed to rely on areas outside the classical left-hemisphere network for alphabetic reading. Here, however, we show that, once potential confounds in cross-cultural comparisons are controlled for by presenting handwritten stimuli to both Chinese and French readers, the underlying network for visual word recognition may be more universal than previously suspected. Using functional magnetic resonance imaging in a semantic task with words written in cursive font, we demonstrate that two universal circuits, a shape recognition system (reading by eye) and a gesture recognition system (reading by hand), are similarly activated and show identical patterns of activation and repetition priming in the two language groups. These activations cover most of the brain regions previously associated with culture-specific tuning. Our results point to an extended reading network that invariably comprises the occipitotemporal visual word-form system, which is sensitive to well-formed static letter strings, and a distinct left premotor region, Exner’s area, which is sensitive to the forward or backward direction with which cursive letters are dynamically presented. These findings suggest that cultural effects in reading merely modulate a fixed set of invariant macroscopic brain circuits, depending on surface features of orthographies. PMID:23184998
ERIC Educational Resources Information Center
Marcet, Ana; Perea, Manuel
2018-01-01
Previous research has shown that early in the word recognition process, there is some degree of uncertainty concerning letter identity and letter position. Here, we examined whether this uncertainty also extends to the mapping of letter features onto letters, as predicted by the Bayesian Reader (Norris & Kinoshita, 2012). Indeed, anecdotal…
Pictures, images, and recollective experience.
Dewhurst, S A; Conway, M A
1994-09-01
Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.
Boles, D B
1989-01-01
Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.
Generating descriptive visual words and visual phrases for large-scale image applications.
Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen
2011-09-01
Bag-of-visual Words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often shown to be not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations which are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicated image retrieval, image search re-ranking, and object recognition. The combination of DVW and DVP performs better than the state of the art in large-scale near-duplicated image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
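The bag-of-visual-words idea above can be illustrated with a minimal sketch. All data and names here are hypothetical: a toy fixed vocabulary of 2-D cluster centres stands in for one learned by k-means over real local descriptors, and the co-occurring pairs merely gesture at the paper's visual phrases.

```python
from collections import Counter
from itertools import combinations

# Hypothetical fixed vocabulary: a real system would learn these centres
# with k-means over SIFT-like descriptors from many images.
VOCABULARY = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]  # visual words 0, 1, 2

def nearest_word(descriptor):
    """Assign a descriptor to the index of the closest vocabulary centre."""
    dists = [(cx - descriptor[0]) ** 2 + (cy - descriptor[1]) ** 2
             for cx, cy in VOCABULARY]
    return dists.index(min(dists))

def bow_histogram(descriptors):
    """Represent one image as a histogram of visual-word counts."""
    counts = Counter(nearest_word(d) for d in descriptors)
    return [counts.get(i, 0) for i in range(len(VOCABULARY))]

def cooccurring_pairs(descriptors):
    """Candidate 'visual phrases': unordered pairs of distinct visual
    words that appear together in the same image."""
    words = {nearest_word(d) for d in descriptors}
    return set(combinations(sorted(words), 2))

image = [(0.1, 0.1), (0.9, 0.1), (0.1, 0.2)]  # three local descriptors
print(bow_histogram(image))      # → [2, 1, 0]
print(cooccurring_pairs(image))  # → {(0, 1)}
```

In the paper's framework, pairs that co-occur frequently across a large database and are discriminative for particular objects or scenes would then be promoted to DVPs; this sketch only shows the representation step.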
Marcet, Ana; Perea, Manuel
2018-05-01
Previous research has shown that early in the word recognition process, there is some degree of uncertainty concerning letter identity and letter position. Here, we examined whether this uncertainty also extends to the mapping of letter features onto letters, as predicted by the Bayesian Reader (Norris & Kinoshita, 2012). Indeed, anecdotal evidence suggests that nonwords containing multi-letter homoglyphs (e.g., rn→m), such as docurnent, can be confusable with their base word. We conducted 2 masked priming lexical decision experiments in which the words/nonwords contained a middle letter that was visually similar to a multi-letter homoglyph (e.g., docurnent [rn-m], presiclent [cl-d]). Three types of primes were employed: identity, multi-letter homoglyph, and orthographic control. We used 2 commonly used fonts: Tahoma in Experiment 1 and Calibri in Experiment 2. Results in both experiments showed faster word identification times in the homoglyph condition than in the control condition (e.g., docurnento-DOCUMENTO faster than docusnento-DOCUMENTO). Furthermore, the homoglyph condition produced nearly the same latencies as the identity condition. These findings have important implications not only at a theoretical level (models of printed word recognition) but also at an applied level (Internet administrators/users). (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz
2010-01-01
Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., “Does xxx sound like an existing word?”) presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538
Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz
2010-02-01
Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. Copyright (c) 2009 Elsevier Inc. All rights reserved.
The time course of spoken word learning and recognition: studies with artificial lexicons.
Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine
2003-06-01
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
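As a rough illustration of how an artificial lexicon gives precise control over word frequency and neighbor similarity, the sketch below builds nonce CVCV items with a cohort neighbor (shared onset) or a rhyme neighbor (shared offset), and manipulates frequency simply by repetition in a training list. The items and helper names are invented for illustration; they are not the study's materials.

```python
import random

random.seed(0)  # deterministic shuffling for this toy example

def make_pair(kind, base="pibo"):
    """Return (base, neighbour) where the neighbour overlaps the base
    at the onset ('cohort') or at the offset ('rhyme')."""
    if kind == "cohort":
        return base, base[:2] + "ka"   # shared first syllable: pibo/pika
    if kind == "rhyme":
        return base, "tu" + base[2:]   # shared final syllable: pibo/tubo
    raise ValueError(kind)

def training_list(words_with_freq):
    """Expand (word, frequency) pairs into a shuffled exposure list, so a
    'high-frequency' item is simply encountered more often in training."""
    items = [w for w, f in words_with_freq for _ in range(f)]
    random.shuffle(items)
    return items

base, cohort = make_pair("cohort")
_, rhyme = make_pair("rhyme")
exposures = training_list([(base, 6), (cohort, 2), (rhyme, 2)])
print(base, cohort, rhyme)                     # → pibo pika tubo
print(len(exposures), exposures.count(base))   # → 10 6
```

In the actual paradigm, such lists are paired with visually guided tasks and eye tracking so that fixation proportions over time index lexical activation; this sketch covers only the stimulus-design logic.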
ERIC Educational Resources Information Center
Malins, Jeffrey G.; Joanisse, Marc F.
2012-01-01
We investigated the influences of phonological similarity on the time course of spoken word processing in Mandarin Chinese. Event related potentials were recorded while adult native speakers of Mandarin ("N" = 19) judged whether auditory words matched or mismatched visually presented pictures. Mismatching words were of the following…
Computational Modeling of Morphological Effects in Bangla Visual Word Recognition
ERIC Educational Resources Information Center
Dasgupta, Tirthankar; Sinha, Manjira; Basu, Anupam
2015-01-01
In this paper we aim to model the organization and processing of Bangla polymorphemic words in the mental lexicon. Our objective is to determine whether the mental lexicon accesses a polymorphemic word as a whole or decomposes the word into its constituent morphemes and then recognizes them accordingly. To address this issue, we adopted two…
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-11-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
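The survival-analysis logic described above can be caricatured in a few lines: estimate the earliest point in time at which two response-time distributions diverge. This simplified sketch uses a fixed divergence threshold rather than the confidence-interval procedure of Reingold and Sheridan (2014), and the RT data are invented.

```python
def survival(rts, t):
    """Proportion of trials whose RT exceeds t (the 'survival' rate)."""
    return sum(rt > t for rt in rts) / len(rts)

def divergence_point(rts_a, rts_b, bins, threshold=0.1):
    """Earliest time bin at which the two survival curves differ by more
    than `threshold`; None if they never do."""
    for t in bins:
        if abs(survival(rts_a, t) - survival(rts_b, t)) > threshold:
            return t
    return None

# Invented RTs (ms), e.g. high- vs low-surface-frequency conditions.
fast = [420, 450, 480, 500, 520]
slow = [520, 550, 580, 600, 620]
print(divergence_point(fast, slow, bins=range(400, 700, 20)))  # → 420
```

The published technique additionally bootstraps the curves to put a confidence bound on the divergence point; the point of this sketch is only that an "earliest discernible influence" is read off the difference between survival curves, not off mean RTs.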
Wang, Shinmin; Allen, Richard J; Lee, Jun Ren; Hsieh, Chia-En
2015-05-01
The creation of temporary bound representation of information from different sources is one of the key abilities attributed to the episodic buffer component of working memory. Whereas the role of working memory in word learning has received substantial attention, very little is known about the link between the development of word recognition skills and the ability to bind information in the episodic buffer of working memory and how it may develop with age. This study examined the performance of Grade 2 children (8 years old), Grade 3 children (9 years old), and young adults on a task designed to measure their ability to bind visual and auditory-verbal information in working memory. Children's performance on this task significantly correlated with their word recognition skills even when chronological age, memory for individual elements, and other possible reading-related factors were taken into account. In addition, clear developmental trajectories were observed, with improvements in the ability to hold temporary bound information in working memory between Grades 2 and 3, and between the child and adult groups, that were independent from memory for the individual elements. These findings suggest that the capacity to temporarily bind novel auditory-verbal information to visual form in working memory is linked to the development of word recognition in children and improves with age. Copyright © 2015 Elsevier Inc. All rights reserved.
The influence of print exposure on the body-object interaction effect in visual word recognition.
Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M
2012-01-01
We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.
Associative priming effects with visible, transposed-letter nonwords: JUGDE facilitates COURT.
Perea, Manuel; Palti, Dafna; Gomez, Pablo
2012-04-01
Associative priming effects can be obtained with masked nonword primes or with masked pseudohomophone primes (e.g., judpe-COURT, tode-FROG), but not with visible primes. The usual explanation is that when the prime is visible, these stimuli no longer activate the semantic representations of their base words. Given the important role of transposed-letter stimuli (e.g., jugde) in visual word recognition, here we examined whether or not an associative priming effect could be obtained with visible transposed-letter nonword primes (e.g., jugde-COURT) in a series of lexical decision experiments. Results showed a sizable associative priming effect with visible transposed-letter nonword primes (i.e., jugde-COURT faster than neevr-COURT) in Experiments 1-3 that was close to that with word primes. In contrast, we failed to find a parallel effect with replacement-letter nonword primes (Experiment 2). These findings pose some constraints to models of visual word recognition.
Perea, Manuel; Jiménez, María; Martín-Suesta, Miguel; Gómez, Pablo
2015-04-01
This article explores how letter position coding is attained during braille reading and its implications for models of word recognition. When text is presented visually, the reading process easily adjusts to the jumbling of some letters (jugde-judge), with a small cost in reading speed. Two explanations have been proposed: One relies on a general mechanism of perceptual uncertainty at the visual level, and the other focuses on the activation of an abstract level of representation (i.e., bigrams) that is shared by all orthographic codes. Thus, these explanations make differential predictions about reading in a tactile modality. In the present study, congenitally blind readers read sentences presented on a braille display that tracked the finger position. The sentences either were intact or involved letter transpositions. A parallel experiment was conducted in the visual modality. Results revealed a substantially greater reading cost for the sentences with transposed-letter words in braille readers. In contrast with the findings with sighted readers, in which there is a cost of transpositions in the external (initial and final) letters, the reading cost in braille readers occurs serially, with a large cost for initial letter transpositions. Thus, these data suggest that the letter-position-related effects in visual word recognition are due to the characteristics of the visual stream.
Automatic lip reading by using multimodal visual features
NASA Astrophysics Data System (ADS)
Takahashi, Shohei; Ohya, Jun
2013-12-01
Speech recognition has been researched for a long time, but it does not work well in noisy places such as in a car or on a train. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information but also from visual information such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech using multimodal visual information, without any audio information. First, the ASM (Active Shape Model) is used to track and detect the face and lips in a video sequence. Second, shape, optical flow, and spatial-frequency features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is trained to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
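The classification stage of such a pipeline can be sketched as follows. The paper uses ASM-derived lip features and a Support Vector Machine; to stay dependency-free, this sketch substitutes invented per-frame feature vectors and a nearest-mean classifier, keeping only the chronological ordering and the train/classify flow.

```python
# Hypothetical data: each utterance is a chronological sequence of
# per-frame lip features (toy 2-D vectors standing in for the paper's
# shape, optical-flow, and spatial-frequency features).

def utterance_vector(frames):
    """Flatten a chronologically ordered frame sequence into one vector."""
    return [x for frame in frames for x in frame]

def train(examples):
    """examples: {word: [utterance, ...]} -> per-word mean vectors
    (a stand-in for fitting an SVM over the same flattened features)."""
    model = {}
    for word, utterances in examples.items():
        vecs = [utterance_vector(u) for u in utterances]
        n = len(vecs)
        model[word] = [sum(col) / n for col in zip(*vecs)]
    return model

def classify(model, frames):
    """Label an unseen utterance with the nearest per-word mean."""
    vec = utterance_vector(frames)
    def dist(centre):
        return sum((a - b) ** 2 for a, b in zip(vec, centre))
    return min(model, key=lambda w: dist(model[w]))

examples = {
    "hello": [[(0.0, 1.0), (0.1, 0.9)], [(0.0, 0.8), (0.2, 1.0)]],
    "world": [[(1.0, 0.0), (0.9, 0.1)], [(0.8, 0.0), (1.0, 0.2)]],
}
model = train(examples)
print(classify(model, [(0.05, 0.9), (0.1, 1.0)]))  # → hello
```

A real implementation would replace the toy features with ASM lip landmarks plus optical-flow and frequency descriptors, and the nearest-mean step with a trained SVM; the chronological flattening of per-frame features into one vector is the part carried over from the method description.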
Does letter rotation slow down orthographic processing in word recognition?
Perea, Manuel; Marcet, Ana; Fernández-López, María
2018-02-01
Leading neural models of visual word recognition assume that letter rotation slows down the conversion of the visual input to a stable orthographic representation (e.g., the local combination detector model; Dehaene, Cohen, Sigman, & Vinckier, 2005, Trends in Cognitive Sciences, 9, 335-341). If this premise is true, briefly presented rotated primes should be less effective at activating word representations than primes with upright letters. To test this prediction, we conducted a masked priming lexical decision experiment with vertically presented words either rotated 90° or in marquee format (i.e., vertical but with upright letters). We examined the impact of the format on both letter identity (masked identity priming: identity vs. unrelated) and letter position (masked transposed-letter priming: transposed-letter prime vs. replacement-letter prime). Results revealed sizeable masked identity and transposed-letter priming effects that were similar in magnitude for rotated and marquee words. Therefore, the reading cost from letter rotation does not arise in the initial access to orthographic/lexical representations.
Context-dependent similarity effects in letter recognition.
Kinoshita, Sachiko; Robidoux, Serje; Guilbert, Daniel; Norris, Dennis
2015-10-01
In visual word recognition tasks, digit primes that are visually similar to letter string targets (e.g., 4/A, 8/B) are known to facilitate letter identification relative to visually dissimilar digits (e.g., 6/A, 7/B); in contrast, with letter primes, visual similarity effects have been elusive. In the present study we show that the visual similarity effect with letter primes can be made to come and go, depending on whether it is necessary to discriminate between visually similar letters. The results support a Bayesian view which regards letter recognition not as a passive activation process driven by the fixed stimulus properties, but as a dynamic evidence accumulation process for a decision that is guided by the task context.
Implicit phonological priming during visual word recognition.
Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C
2011-03-15
Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
de Zeeuw, Marlies; Verhoeven, Ludo; Schreuder, Robert
2012-01-01
This study examined to what extent young second language (L2) learners showed morphological family size effects in L2 word recognition and whether the effects were grade-level related. Turkish-Dutch bilingual children (L2) and Dutch (first language, L1) children from second, fourth, and sixth grade performed a Dutch lexical decision task on words…
Auditory, Visual, and Auditory-Visual Perception of Vowels by Hearing-Impaired Children.
ERIC Educational Resources Information Center
Hack, Zarita Caplan; Erber, Norman P.
1982-01-01
Vowels were presented through auditory, visual, and auditory-visual modalities to 18 hearing impaired children (12 to 15 years old) having good, intermediate, and poor auditory word recognition skills. All the groups had difficulty with acoustic information and visual information alone. The first two groups had only moderate difficulty identifying…
How Yellow Is Your Banana? Toddlers' Language-Mediated Visual Search in Referent-Present Tasks
ERIC Educational Resources Information Center
Mani, Nivedita; Johnson, Elizabeth; McQueen, James M.; Huettig, Falk
2013-01-01
What is the relative salience of different aspects of word meaning in the developing lexicon? The current study examines the time-course of retrieval of semantic and color knowledge associated with words during toddler word recognition: At what point do toddlers orient toward an image of a yellow cup upon hearing color-matching words such as…
Morphological Effects in Children Word Reading: A Priming Study in Fourth Graders
ERIC Educational Resources Information Center
Casalis, Severine; Dusautoir, Marion; Cole, Pascale; Ducrot, Stephanie
2009-01-01
A growing corpus of evidence suggests that morphology could play a role in reading acquisition, and that young readers could be sensitive to the morphemic structure of written words. In the present experiment, we examined whether and when morphological information is activated in word recognition. French fourth graders made visual lexical…
Hemispheric Differences in Bilingual Word and Language Recognition.
ERIC Educational Resources Information Center
Roberts, William T.; And Others
The linguistic role of the right hemisphere in bilingual language processing was examined. Ten right-handed Spanish-English bilinguals were tachistoscopically presented with mixed lists of Spanish and English words to either the right or left visual field and asked to identify the language and the word presented. Five of the subjects identified…
Processing Trade-Offs in the Reading of Dutch Derived Words
ERIC Educational Resources Information Center
Kuperman, Victor; Bertram, Raymond; Baayen, R. Harald
2010-01-01
This eye-tracking study explores visual recognition of Dutch suffixed words (e.g., "plaats+ing" "placing") embedded in sentential contexts, and provides new evidence on the interplay between storage and computation in morphological processing. We show that suffix length crucially moderates the use of morphological properties. In words with shorter…
There Is Something about Grammatical Category in Chinese Visual Word Recognition
ERIC Educational Resources Information Center
Kwong, Oi Yee
2016-01-01
The differential processing of nouns and verbs has been attributed to a combination of morphological, syntactic and semantic factors which are often intertwined with other general lexical properties. This study tested the noun-verb difference with Chinese disyllabic words controlled on various lexical parameters. As Chinese words are free from…
Preserved Visual Language Identification Despite Severe Alexia
ERIC Educational Resources Information Center
Di Pietro, Marie; Ptak, Radek; Schnider, Armin
2012-01-01
Patients with letter-by-letter alexia may have residual access to lexical or semantic representations of words despite severely impaired overt word recognition (reading). Here, we report a multilingual patient with severe letter-by-letter alexia who rapidly identified the language of written words and sentences in French and English while he had…
Words, Hemispheres, and Processing Mechanisms: A Response to Marsolek and Deason (2007)
ERIC Educational Resources Information Center
Ellis, Andrew W.; Ansorge, Lydia; Lavidor, Michal
2007-01-01
Ellis, Ansorge and Lavidor (2007) [Ellis, A.W., Ansorge, L., & Lavidor, M. (2007). Words, hemispheres, and dissociable subsystems: The effects of exposure duration, case alternation, priming and continuity of form on word recognition in the left and right visual fields. "Brain and Language," 103, 292-303.] presented three experiments investigating…
Number Reading in Pure Alexia--A Review
ERIC Educational Resources Information Center
Starrfelt, Randi; Behrmann, Marlene
2011-01-01
It is commonly assumed that number reading can be intact in patients with pure alexia, and that this dissociation between letter/word recognition and number reading strongly constrains theories of visual word processing. A truly selective deficit in letter/word processing would strongly support the hypothesis that there is a specialized system or…
Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne
2017-08-01
Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. 
Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
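The core computation assumed by such an optimal Bayesian decision maker, a posterior over word identities given noisy letter evidence, with word frequency as the prior, can be illustrated with a toy lexicon. This is a hypothetical sketch of the general idea, not Norris's Bayesian reader model; the lexicon, noise parameter, and likelihood function are all assumptions for illustration.

```python
import numpy as np

# Toy three-word lexicon with frequency counts serving as priors.
lexicon = {"cat": 60, "car": 30, "can": 10}
words = list(lexicon)
prior = np.array([lexicon[w] for w in words], dtype=float)
prior /= prior.sum()

def letter_likelihood(observed, intended, noise=0.2):
    # P(perceive `observed` | letter was `intended`), with confusion
    # probability spread uniformly over the other 25 letters.
    return 1 - noise if observed == intended else noise / 25

def posterior(percept):
    # Bayes' rule: posterior ∝ likelihood × prior, normalized over the lexicon.
    like = np.array([np.prod([letter_likelihood(o, c) for o, c in zip(percept, w)])
                     for w in words])
    post = like * prior
    return dict(zip(words, post / post.sum()))

print(posterior("cat"))
```

Because the prior is word frequency, high-frequency words need less perceptual evidence to cross a decision threshold, which is how this framing reproduces the frequency effects the model accounts for.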
W-tree indexing for fast visual word generation.
Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao
2013-03-01
The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word for a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a certain index structure (e.g., a KD-tree or a K-means tree), in order to redirect the searching path to be close to its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the fast library for approximate nearest neighbors and the random KD-trees on the Oxford data set. Thorough experimental results demonstrate the efficiency and effectiveness of the new scheme.
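The baseline visual word generation step that the paper accelerates, nearest-neighbor assignment of local descriptors to vocabulary entries via a spatial index, together with the co-occurrence statistic it exploits, can be sketched with SciPy's KD-tree. The vocabulary, descriptors, and sizes here are simulated stand-ins, and this sketch omits the paper's probabilistic reweighting of the search path.

```python
from collections import Counter

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)

# Hypothetical vocabulary of 50 visual words (cluster centers) in 8-D,
# indexed with a KD-tree for fast nearest-neighbor lookup.
vocab = rng.standard_normal((50, 8))
tree = cKDTree(vocab)

# Local descriptors extracted from one image.
descriptors = rng.standard_normal((200, 8))

# Visual word generation: assign each descriptor to its nearest vocabulary entry.
_, word_ids = tree.query(descriptors)

# Bag-of-visual-words histogram for the image.
bow = np.bincount(word_ids, minlength=len(vocab))

# Co-occurrence counts: which visual words appear together in the same image.
# This is the kind of statistic the paper uses to bias the search path.
cooc = Counter((a, b) for a in set(word_ids) for b in set(word_ids) if a != b)
print(bow.sum())
```

In the paper's scheme, the co-occurrence table turns these counts into per-node weights so the tree search reaches a near-optimal assignment with far fewer backtrackings than an unweighted KD-tree search.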
Recognition without Awareness: Encoding and Retrieval Factors
ERIC Educational Resources Information Center
Craik, Fergus I. M.; Rose, Nathan S.; Gopie, Nigel
2015-01-01
The article reports 4 experiments that explore the notion of recognition without awareness using words as the material. Previous work by Voss and associates has shown that complex visual patterns were correctly selected as targets in a 2-alternative forced-choice (2-AFC) recognition test although participants reported that they were guessing. The…
Mark My Words: Tone of Voice Changes Affective Word Representations in Memory
Schirmer, Annett
2010-01-01
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154
Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid
2005-06-01
Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.
A test of the orthographic recoding hypothesis
NASA Astrophysics Data System (ADS)
Gaygen, Daniel E.
2003-04-01
The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.
Deep generative learning of location-invariant visual word recognition.
Di Bono, Maria Grazia; Zorzi, Marco
2013-01-01
It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. 
These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
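The paper's probe, linear decoding of word identity from a location-invariant code, can be illustrated with a toy stand-in for the network's deepest layer. Here a letter-identity histogram plays the role of the learned position-abstracted representation; the alphabet, word set, and number of locations are hypothetical, and a real deep network would learn a richer code than this (a pure letter histogram, for instance, cannot distinguish anagrams).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy setup: six 4-letter "words" over a 10-letter alphabet, each shown
# at 5 retinal locations.
ALPHABET, LOCS = 10, 5
words = [(0, 1, 2, 3), (4, 5, 6, 7), (8, 9, 0, 1),
         (2, 4, 6, 8), (1, 3, 5, 7), (0, 2, 4, 6)]

def invariant_code(word):
    # Letter-identity histogram: abstracts away retinal position entirely,
    # standing in for the deepest hidden layer's location-invariant code.
    return np.bincount(np.array(word), minlength=ALPHABET)

# The same word presented at any of the 5 locations maps to one code.
X = np.array([invariant_code(w) for w in words for _ in range(LOCS)])
y = np.repeat(np.arange(len(words)), LOCS)

# Linear read-out of word identity, analogous to the paper's linear decoder.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```

Because the code is identical across presentation locations, a simple linear classifier recovers word identity regardless of where the word appeared, which is the sense in which decoding from the deepest layer is location-invariant.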
The unique role of the visual word form area in reading.
Dehaene, Stanislas; Cohen, Laurent
2011-06-01
Reading systematically activates the left lateral occipitotemporal sulcus, at a site known as the visual word form area (VWFA). This site is reproducible across individuals/scripts, attuned to reading-specific processes, and partially selective for written strings relative to other categories such as line drawings. Lesions affecting the VWFA cause pure alexia, a selective deficit in word recognition. These findings must be reconciled with the fact that human genome evolution cannot have been influenced by such a recent and culturally variable activity as reading. Capitalizing on recent functional magnetic resonance imaging experiments, we provide strong corroborating evidence for the hypothesis that reading acquisition partially recycles a cortical territory evolved for object and face recognition, the prior properties of which influenced the form of writing systems. Copyright © 2011 Elsevier Ltd. All rights reserved.
Semantic Richness and Aging: The Effect of Number of Features in the Lexical Decision Task
ERIC Educational Resources Information Center
Robert, Christelle; Rico Duarte, Liliana
2016-01-01
The aim of this study was to examine whether the effect of semantic richness in visual word recognition (i.e., words with a rich semantic representation are faster to recognize than words with a poorer semantic representation), is changed with aging. Semantic richness was investigated by manipulating the number of features of words (NOF), i.e.,…
Intrusive effects of implicitly processed information on explicit memory.
Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven
2002-02-01
This study described the interference of implicitly processed information with memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitate implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated that participants both failed to recognize words from the explicit study list and falsely recognized implicitly processed words as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality of the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information, considered in light of the procedures used, and as an example of "remembering causing forgetting."
Is the Orthographic/Phonological Onset a Single Unit in Reading Aloud?
ERIC Educational Resources Information Center
Mousikou, Petroula; Coltheart, Max; Saunders, Steven; Yen, Lisa
2010-01-01
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of…
Morphological Decomposition in the Recognition of Prefixed and Suffixed Words: Evidence from Korean
ERIC Educational Resources Information Center
Kim, Say Young; Wang, Min; Taft, Marcus
2015-01-01
Korean has visually salient syllable units that are often mapped onto either prefixes or suffixes in derived words. In addition, prefixed and suffixed words may be processed differently given a left-to-right parsing procedure and the need to resolve morphemic ambiguity in prefixes in Korean. To test this hypothesis, four experiments using the…
A neuroimaging study of conflict during word recognition.
Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F
2010-08-04
Using functional magnetic resonance imaging, the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish, conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.
Ma, Bosen; Wang, Xiaoyun; Li, Degao
2015-01-01
To separate the contribution of phonological from that of visual-orthographic information in the recognition of a Chinese word composed of one or two Chinese characters, we conducted two experiments in a priming task of semantic categorization (PTSC), in which length (one- or two-character words), prime relatedness (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A, and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources than the one-character words in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes only had an inhibitory influence on the participants' reaction times at the SOA of 187 ms. The visual configuration of a Chinese word of one or two characters makes its own contribution to retrieving the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activation of the word's semantic representations.
Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas
2013-01-01
Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542
Address entry while driving: speech recognition versus a touch-screen keyboard.
Tsimhoni, Omer; Smith, Daniel; Green, Paul
2004-01-01
A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.
Visual Aspects of Written Composition.
ERIC Educational Resources Information Center
Autrey, Ken
While attempting to refine and redefine the composing process, rhetoric teachers have overlooked research showing how the brain's visual and verbal components interrelate. Recognition of the brain's visual potential can mean more than the use of media with the written word--it also has implications for the writing process itself. For example,…
Rapid modulation of spoken word recognition by visual primes.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2016-02-01
In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.
Orthographic Processing in Visual Word Identification.
ERIC Educational Resources Information Center
Humphreys, Glyn W.; And Others
1990-01-01
A series of 6 experiments involving 210 subjects from a college subject pool examined orthographic priming effects between briefly presented pairs of letter strings. A theory of orthographic priming is presented, and the implications of the findings for understanding word recognition and reading are discussed. (SLD)
The role of syllabic structure in French visual word recognition.
Rouibah, A; Taft, M
2001-03-01
Two experiments are reported in which the processing units involved in the reading of French polysyllabic words are examined. A comparison was made between units following the maximal onset principle (i.e., the spoken syllable) and units following the maximal coda principle (i.e., the basic orthographic syllabic structure [BOSS]). In the first experiment, it took longer to recognize that a syllable was the beginning of a word (e.g., the FOE of FOETUS) than to make the same judgment of a BOSS (e.g., FOET). The fact that a BOSS plus one letter (e.g., FOETU) also took longer to judge than the BOSS indicated that the maximal coda principle applies to the units of processing in French. The second experiment confirmed this, using a lexical decision task with the different units being demarcated on the basis of color. It was concluded that the syllabic structure that is so clearly manifested in the spoken form of French is not involved in visual word recognition.
Barker, Lynne Ann; Morton, Nicholas; Romanowski, Charles A J; Gosden, Kevin
2013-10-24
We report a rare case of a patient unable to read (alexic) and write (agraphic) after a mild head injury. He had preserved speech and comprehension, could spell aloud, identify words spelt aloud and copy letter features. He was unable to visualise letters but showed no problems with digits. Neuropsychological testing revealed general visual memory, processing speed and imaging deficits. Imaging data revealed an 8 mm colloid cyst of the third ventricle that splayed the fornix. Little is known about functions mediated by fornical connectivity, but this region is thought to contribute to memory recall. Other regions thought to mediate letter recognition and letter imagery, visual word form area and visual pathways were intact. We remediated reading and writing by multimodal letter retraining. The study raises issues about the neural substrates of reading, role of fornical tracts to selective memory in the absence of other pathology, and effective remediation strategies for selective functional deficits.
Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode
Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina
2013-01-01
Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976
ERIC Educational Resources Information Center
Woollams, Anna M.; Silani, Giorgia; Okada, Kayoko; Patterson, Karalyn; Price, Cathy J.
2011-01-01
Prior lesion and functional imaging studies have highlighted the importance of the left ventral occipito-temporal (LvOT) cortex for visual word recognition. Within this area, there is a posterior-anterior hierarchy of subregions that are specialized for different stages of orthographic processing. The aim of the present fMRI study was to…
ERIC Educational Resources Information Center
Balota, David A.; Aschenbrenner, Andrew J.; Yap, Melvin J.
2013-01-01
A counterintuitive and theoretically important pattern of results in the visual word recognition literature is that both word frequency and stimulus quality produce large but additive effects in lexical decision performance. The additive nature of these effects has recently been called into question by Masson and Kliegl (in press), who used linear…
ERIC Educational Resources Information Center
Wilson, Maximiliano A.; Cuetos, Fernando; Davies, Rob; Burani, Cristina
2013-01-01
Word age-of-acquisition (AoA) affects reading. The mapping hypothesis predicts AoA effects when input--output mappings are arbitrary. In Spanish, the orthography-to-phonology mappings required for word naming are consistent; therefore, no AoA effects are expected. Nevertheless, AoA effects have been found, motivating the present investigation of…
The Lexical Status of the Root in Processing Morphologically Complex Words in Arabic
ERIC Educational Resources Information Center
Shalhoub-Awwad, Yasmin; Leikin, Mark
2016-01-01
This study investigated the effects of the Arabic root in the visual word recognition process among young readers in order to explore its role in reading acquisition and its development within the structure of the Arabic mental lexicon. We examined cross-modal priming of words that were derived from the same root of the target…
ERIC Educational Resources Information Center
Zascavage, Victoria Selden; McKenzie, Ginger Kelley; Buot, Max; Woods, Carol; Orton-Gillingham, Fellow
2012-01-01
This study compared word recognition for words written in a traditional flat font to the same words written in a three-dimensional appearing font determined to create a right hemispheric stimulation. The participants were emergent readers enrolled in Montessori schools in the United States learning to read basic CVC (consonant, vowel, consonant)…
English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition
Poellmann, Katja; Kong, Ying-Yee
2017-01-01
Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135
School-aged children can benefit from audiovisual semantic congruency during memory encoding.
Heikkilä, Jenni; Tiippana, Kaisa
2016-05-01
Although we live in a multisensory world, children's memory has been usually studied concentrating on only one sensory modality at a time. In this study, we investigated how audiovisual encoding affects recognition memory. Children (n = 114) from three age groups (8, 10 and 12 years) memorized auditory or visual stimuli presented with a semantically congruent, incongruent or non-semantic stimulus in the other modality during encoding. Subsequent recognition memory performance was better for auditory or visual stimuli initially presented together with a semantically congruent stimulus in the other modality than for stimuli accompanied by a non-semantic stimulus in the other modality. This congruency effect was observed for pictures presented with sounds, for sounds presented with pictures, for spoken words presented with pictures and for written words presented with spoken words. The present results show that semantically congruent multisensory experiences during encoding can improve memory performance in school-aged children.
Preschoolers Benefit From Visually Salient Speech Cues
Holt, Rachael Frush
2015-01-01
Purpose This study explored visual speech influence in preschoolers using 3 developmentally appropriate tasks that vary in perceptual difficulty and task demands. The authors also examined developmental differences in the ability to use visually salient speech cues and visual phonological knowledge. Method Twelve adults and 27 typically developing 3- and 4-year-old children completed 3 audiovisual (AV) speech integration tasks: matching, discrimination, and recognition. The authors compared AV benefit for visually salient and less visually salient speech discrimination contrasts and assessed the visual saliency of consonant confusions in auditory-only and AV word recognition. Results Four-year-olds and adults demonstrated visual influence on all measures. Three-year-olds demonstrated visual influence on speech discrimination and recognition measures. All groups demonstrated greater AV benefit for the visually salient discrimination contrasts. AV recognition benefit in 4-year-olds and adults depended on the visual saliency of speech sounds. Conclusions Preschoolers can demonstrate AV speech integration. Their AV benefit results from efficient use of visually salient speech cues. Four-year-olds, but not 3-year-olds, used visual phonological knowledge to take advantage of visually salient speech cues, suggesting possible developmental differences in the mechanisms of AV benefit. PMID:25322336
Ally, Brandon A.
2012-01-01
Difficulty recognizing previously encountered stimuli is one of the earliest signs of incipient Alzheimer’s disease (AD). Work over the last 10 years has focused on how patients with AD and those in the prodromal stage of amnestic mild cognitive impairment (aMCI) make recognition decisions for visual and verbal stimuli. Interestingly, both groups of patients demonstrate markedly better memory for pictures over words, to a degree that is significantly greater in magnitude than their healthy older counterparts. Understanding this phenomenon not only helps to conceptualize how memory breaks down in AD, but also potentially provides the basis for future interventions. The current review will critically examine recent recognition memory work using pictures and words in the context of the dual-process theory of recognition and current hypotheses of cognitive breakdown in the course of very early AD. PMID:22927024
Agnosic vision is like peripheral vision, which is limited by crowding.
Strappini, Francesca; Pelli, Denis G; Di Pace, Enrico; Martelli, Marialuisa
2017-04-01
Visual agnosia is a neuropsychological impairment of visual object recognition despite near-normal acuity and visual fields. A century of research has provided only a rudimentary account of the functional damage underlying this deficit. We find that the object-recognition ability of agnosic patients viewing an object directly is like that of normally-sighted observers viewing it indirectly, with peripheral vision. Thus, agnosic vision is like peripheral vision. We obtained 14 visual-object-recognition tests that are commonly used for diagnosis of visual agnosia. Our "standard" normal observer took these tests at various eccentricities in his periphery. Analyzing the published data of 32 apperceptive agnosia patients and a group of 14 posterior cortical atrophy (PCA) patients on these tests, we find that each patient's pattern of object recognition deficits is well characterized by one number, the equivalent eccentricity at which our standard observer's peripheral vision is like the central vision of the agnosic patient. In other words, each agnosic patient's equivalent eccentricity is conserved across tests. Across patients, equivalent eccentricity ranges from 4 to 40 deg, which rates severity of the visual deficit. In normal peripheral vision, the required size to perceive a simple image (e.g., an isolated letter) is limited by acuity, and that for a complex image (e.g., a face or a word) is limited by crowding. In crowding, adjacent simple objects appear unrecognizably jumbled unless their spacing exceeds the crowding distance, which grows linearly with eccentricity. Besides conservation of equivalent eccentricity across object-recognition tests, we also find conservation, from eccentricity to agnosia, of the relative susceptibility of recognition of ten visual tests. These findings show that agnosic vision is like eccentric vision. Whence crowding? 
Peripheral vision, strabismic amblyopia, and possibly apperceptive agnosia are all limited by crowding, making it urgent to know what drives crowding. Acuity does not (Song et al., 2014), but neural density might: neurons per deg² in the crowding-relevant cortical area.
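The linear crowding rule invoked in this abstract, and the "equivalent eccentricity" it motivates, can be sketched numerically. The proportionality constant of 0.5 (the Bouma fraction) below is an assumed textbook value, not a parameter reported by the study:

```python
def critical_spacing(eccentricity_deg: float, bouma_fraction: float = 0.5) -> float:
    """Crowding distance grows linearly with eccentricity, as the
    abstract notes. The 0.5 Bouma fraction is an illustrative
    assumption, not a value from this study."""
    return bouma_fraction * eccentricity_deg

def equivalent_eccentricity(required_spacing_deg: float, bouma_fraction: float = 0.5) -> float:
    """Invert the linear rule: the peripheral eccentricity at which a
    normal observer needs the same spacing a patient needs with
    central vision (hypothetical scoring sketch)."""
    return required_spacing_deg / bouma_fraction

print(critical_spacing(10.0))        # 5.0 deg spacing needed at 10 deg eccentricity
print(equivalent_eccentricity(2.0))  # a patient needing 2 deg spacing scores ~4 deg
```

Under this rule, doubling eccentricity doubles the required spacing, which is why a single equivalent-eccentricity number can summarize a patient's deficit across tests.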
Deployment of spatial attention to words in central and peripheral vision.
Ducrot, Stéphanie; Grainger, Jonathan
2007-05-01
Four perceptual identification experiments examined the influence of spatial cues on the recognition of words presented in central vision (with fixation on either the first or last letter of the target word) and in peripheral vision (displaced left or right of a central fixation point). Stimulus location had a strong effect on word identification accuracy in both central and peripheral vision, showing a strong right visual field superiority that did not depend on eccentricity. Valid spatial cues improved word identification for peripherally presented targets but were largely ineffective for centrally presented targets. Effects of spatial cuing interacted with visual field effects in Experiment 1, with valid cues reducing the right visual field superiority for peripherally located targets, but this interaction was shown to depend on the type of neutral cue. These results provide further support for the role of attentional factors in visual field asymmetries obtained with targets in peripheral vision but not with centrally presented targets.
Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric
2016-01-01
Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013
Age of Acquisition and Imageability: A Cross-Task Comparison
ERIC Educational Resources Information Center
Ploetz, Danielle M.; Yates, Mark
2016-01-01
Previous research has reported an imageability effect on visual word recognition. Words that are high in imageability are recognised more rapidly than are those lower in imageability. However, later researchers argued that imageability was confounded with age of acquisition. In the current research, these two factors were manipulated in a…
Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni
2016-06-01
Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. 
This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region.
The activation of segmental and tonal information in visual word recognition.
Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan
2013-08-01
Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., hong2, "red"), homophones of the color characters (S+T+; e.g., hong2, "flood"), different-tone homophones (S+T-; e.g., hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., ping2, "bottle"), and neutral characters (S-T-; e.g., qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.
ERIC Educational Resources Information Center
Hunter, Zoe R.; Brysbaert, Marc
2008-01-01
Traditional neuropsychology employs visual half-field (VHF) experiments to assess cerebral language dominance. This approach is based on the assumption that left cerebral dominance for language leads to faster and more accurate recognition of words in the right visual half-field (RVF) than in the left visual half-field (LVF) during tachistoscopic…
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc
2011-01-01
Words are processed as units. This is not as evident as it seems, given the division of the human cerebral cortex in two hemispheres and the partial decussation of the optic tract. In two experiments, we investigated what underlies the unity of foveally presented words: A bilateral projection of visual input in foveal vision, or interhemispheric…
ERIC Educational Resources Information Center
Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.
2009-01-01
Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the…
False memory and level of processing effect: an event-related potential study.
Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara
2012-09-12
Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated with a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures each were presented auditorily to participants, who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. The visual modality was used for the final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERPs than items studied under deep encoding conditions in the 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level, not at the ERP level.
Visual speech information: a help or hindrance in perceptual processing of dysarthric speech.
Borrie, Stephanie A
2015-03-01
This study investigated the influence of visual speech information on perceptual processing of neurologically degraded speech. Fifty listeners identified spastic dysarthric speech under both audio (A) and audiovisual (AV) conditions. Condition comparisons revealed that the addition of visual speech information enhanced processing of the neurologically degraded input in terms of (a) acuity (percent phonemes correct) of vowels and consonants and (b) recognition (percent words correct) of predictive and nonpredictive phrases. Listeners exploited stress-based segmentation strategies more readily in AV conditions, suggesting that the perceptual benefit associated with adding visual speech information to the auditory signal-the AV advantage-has both segmental and suprasegmental origins. Results also revealed that the magnitude of the AV advantage can be predicted, to some degree, by the extent to which an individual utilizes syllabic stress cues to inform word recognition in AV conditions. Findings inform the development of a listener-specific model of speech perception that applies to processing of dysarthric speech in everyday communication contexts.
The effect of changing the secondary task in dual-task paradigms for measuring listening effort.
Picou, Erin M; Ricketts, Todd A
2014-01-01
The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks varied from the simple paradigm mainly by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task.
In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
Spoken Word Recognition in Children with Autism Spectrum Disorder: The Role of Visual Disengagement
ERIC Educational Resources Information Center
Venker, Courtney E.
2017-01-01
Deficits in visual disengagement are one of the earliest emerging differences in infants who are later diagnosed with autism spectrum disorder. Although researchers have speculated that deficits in visual disengagement could have negative effects on the development of children with autism spectrum disorder, we do not know which skills are…
Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen
2018-07-01
Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.
Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia
2018-02-12
Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Specifically, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.
ERIC Educational Resources Information Center
Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F.
2017-01-01
Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these…
Word Recognition and Basic Cognitive Processes among Reading-Disabled and Normal Readers in Arabic.
ERIC Educational Resources Information Center
Abu-Rabia, Salim; Share, David; Mansour, Maysaloon Said
2003-01-01
Investigates word identification in Arabic and basic cognitive processes in reading-disabled (RD) and normal level readers of the same chronological age, and in younger normal readers at the same reading level. Indicates significant deficiencies in morphology, working memory, and syntactic and visual processing, with the most severe deficiencies…
Seamon, John G; Lee, Ihno A; Toner, Sarah K; Wheeler, Rachel H; Goodkind, Madeleine S; Birch, Antoine D
2002-11-01
Do participants in the Deese, Roediger, and McDermott (DRM) procedure demonstrate false memory because they think of nonpresented critical words during study and confuse them with words that were actually presented? In two experiments, 160 participants studied eight visually presented DRM lists at a rate of 2 s or 5 s per word. Half of the participants rehearsed silently; the other half rehearsed overtly. Following study, the participants' memory for the lists was tested by recall or recognition. Typical false memory results were obtained for both memory measures. More important, two new results were observed. First, a large majority of the overt-rehearsal participants spontaneously rehearsed approximately half of the critical words during study. Second, critical-word rehearsal at study enhanced subsequent false recall, but it had no effect on false recognition or remember judgments for falsely recognized critical words. Thinking of critical words during study was unnecessary for producing false memory.
NASA Astrophysics Data System (ADS)
Hassanat, Ahmad B. A.; Jassim, Sabah
2010-04-01
In this paper, the automatic lip reading problem is investigated, and an innovative approach to solving it is proposed. This new VSR approach depends on the signature of the word itself, which is obtained from a hybrid feature extraction method based on geometric, appearance, and image transform features. The proposed VSR approach is termed "visual words". The visual words approach consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth, the mutual information and quality measurement between the DWT of the current ROI and the DWT of the previous ROI, the ratio of vertical to horizontal features taken from the DWT of the ROI, the ratio of vertical edges to horizontal edges of the ROI, the appearance of the tongue, and the appearance of teeth. Each spoken word is represented by 8 signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using KNN and DTW. This approach has been evaluated using a large database of different people and large sets of experiments. The evaluation has demonstrated the efficiency of the visual words approach and shown that VSR is a speaker-dependent problem.
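The abstract above only names the classifiers (KNN and DTW). As a rough sketch, not the authors' implementation, the following shows how a word represented by several per-frame feature signals could be compared with dynamic time warping and labeled by a 1-nearest-neighbor rule; the toy signals and labels are invented for illustration:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D feature signals."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest warping path ending at (i, j)
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def word_distance(word_a, word_b):
    """A word is a list of feature signals (8 in the paper); sum DTW over signals."""
    return sum(dtw_distance(sa, sb) for sa, sb in zip(word_a, word_b))

def classify(query, templates):
    """1-NN: return the label of the closest training word."""
    return min(templates, key=lambda t: word_distance(query, t[1]))[0]

# toy example: two training "words", each with 2 signals instead of 8
train = [("yes", [np.array([0., 1., 2., 1.]), np.array([1., 1., 0., 0.])]),
         ("no",  [np.array([2., 2., 2., 2.]), np.array([0., 0., 1., 2.])])]
query = [np.array([0., 1., 2., 2., 1.]), np.array([1., 1., 0., 0., 0.])]
print(classify(query, train))  # → yes
```

Because DTW aligns signals of different lengths, the query's five-frame signals still match the four-frame "yes" template with zero cost.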
ERIC Educational Resources Information Center
Ellis, Andrew W.; Ansorge, Lydia; Lavidor, Michal
2007-01-01
Three experiments explore aspects of the dissociable neural subsystems theory of hemispheric specialisation proposed by Marsolek and colleagues, and in particular a study by [Deason, R. G., & Marsolek, C. J. (2005). A critical boundary to the left-hemisphere advantage in word processing. "Brain and Language," 92, 251-261]. Experiment 1A showed…
Khelifi, Rachid; Sparrow, Laurent; Casalis, Séverine
2015-11-01
We assessed third and fifth graders' processing of parafoveal word information using a lexical decision task. On each trial, a preview word was first briefly presented parafoveally in the left or right visual field before a target word was displayed. Preview and target words could be identical, share the first three letters, or have no letters in common. Experiment 1 showed that developing readers receive the same word recognition benefit from parafoveal previews as expert readers. The impact of a change of case between preview and target in Experiment 2 showed that in all groups of readers, the preview benefit resulted from the identification of letters at an abstract level rather than from facilitation at a purely visual level. Fifth graders identified more letters from the preview than third graders. The results are interpreted within the framework of the interactive activation model. In particular, we suggest that although the processing of parafoveal information led to letter identification in developing readers, the processes involved may differ from those in expert readers. Although expert readers' processing of parafoveal information led to activation at the level of lexical representations, no such activation was observed in developing readers. Copyright © 2015 Elsevier Inc. All rights reserved.
Pornographic image recognition and filtering using incremental learning in compressed domain
NASA Astrophysics Data System (ADS)
Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao
2015-11-01
With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which have done great harm to adolescents' physical and mental health. With the establishment of image compression standards, pornographic images are mainly stored in compressed formats. Therefore, how to efficiently filter pornographic images is one of the challenging issues for information security. A pornographic image recognition and filtering method in the compressed domain is proposed by using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images, (2) visual words are created from the LR image to represent the pornographic image, and (3) incremental learning is adopted to continuously adjust the classification rules to recognize new pornographic image samples, after the covering algorithm is utilized to train and recognize the visual words in order to build the initial classification model of pornographic images. The experimental results show that the proposed pornographic image recognition method using incremental learning has a higher recognition rate and requires less recognition time in the compressed domain.
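The covering-algorithm classifier the abstract mentions is not specified here, so the sketch below only illustrates the two generic ingredients it describes: a bag-of-visual-words image representation and a classifier whose model is updated incrementally as new labeled samples arrive. All names, the nearest-centroid rule, and the toy 1-D descriptors are assumptions for illustration, not the authors' method:

```python
import numpy as np

def bovw_histogram(descriptors, codebook):
    """Assign each local descriptor to its nearest visual word and
    return the normalized word-count histogram for the image."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    hist = np.bincount(d2.argmin(axis=1), minlength=len(codebook)).astype(float)
    return hist / hist.sum()

class IncrementalCentroidClassifier:
    """Nearest-centroid classifier whose per-class means are updated
    online, so new samples adjust the model without full retraining."""
    def __init__(self):
        self.means, self.counts = {}, {}

    def partial_fit(self, hist, label):
        if label not in self.means:
            self.means[label], self.counts[label] = hist.copy(), 1
        else:  # running-mean update with the new sample
            self.counts[label] += 1
            self.means[label] += (hist - self.means[label]) / self.counts[label]

    def predict(self, hist):
        return min(self.means, key=lambda c: np.linalg.norm(hist - self.means[c]))

# toy 1-D descriptors and a 2-word codebook
codebook = np.array([[0.0], [1.0]])
clf = IncrementalCentroidClassifier()
clf.partial_fit(bovw_histogram(np.array([[0.0], [0.1], [0.2]]), codebook), "normal")
clf.partial_fit(bovw_histogram(np.array([[0.9], [1.0], [0.8]]), codebook), "filtered")
print(clf.predict(bovw_histogram(np.array([[0.05], [0.1]]), codebook)))  # → normal
```

The `partial_fit` running mean is what makes the model incremental: each new labeled histogram shifts its class centroid without revisiting earlier samples.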
Hemispheric asymmetry in holistic processing of words.
Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N
2018-05-13
Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words, and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres to holistic word processing. Observers performed same/different judgments on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.
A SUGGESTED METHOD FOR PRE-SCHOOL IDENTIFICATION OF POTENTIAL READING DISABILITY.
ERIC Educational Resources Information Center
NEWTON, KENNETH R.; AND OTHERS
The relationships between prereading measures of visual-motor-perceptual skills and reading achievement were studied. Subjects were 172 first graders. Pretests and post-tests for word recognition, motor coordination, and visual perception were administered. Fourteen variables were tested. Results indicated that form-copying was more effective than…
Audio-Visual Speech in Noise Perception in Dyslexia
ERIC Educational Resources Information Center
van Laarhoven, Thijs; Keetels, Mirjam; Schakel, Lemmy; Vroomen, Jean
2018-01-01
Individuals with developmental dyslexia (DD) may experience, besides reading problems, other speech-related processing deficits. Here, we examined the influence of visual articulatory information (lip-read speech) at various levels of background noise on auditory word recognition in children and adults with DD. We found that children with a…
Immediate effects of form-class constraints on spoken word recognition
Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.
2008-01-01
In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408
Visual agnosia and focal brain injury.
Martinaud, O
Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
ERIC Educational Resources Information Center
Warmington, Meesha; Hulme, Charles
2012-01-01
This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…
Evidence for a Limited-Cascading Account of Written Word Naming
ERIC Educational Resources Information Center
Bonin, Patrick; Roux, Sebastien; Barry, Christopher; Canell, Laura
2012-01-01
We address the issue of how information flows within the written word production system by examining written object-naming latencies. We report 4 experiments in which we manipulate variables assumed to have their primary impact at the level of object recognition (e.g., quality of visual presentation of pictured objects), at the level of semantic…
ERIC Educational Resources Information Center
Sanchez, Laura V.
2014-01-01
Adult literacy training is known to be difficult in terms of teaching and maintenance (Abadzi, 2003), perhaps because adults who recently learned to read in their first language have not acquired reading automaticity. This study examines fast word recognition process in neoliterate adults, to evaluate whether they show evidence of perceptual…
Investigating Developmental Trajectories of Morphemes as Reading Units in German
ERIC Educational Resources Information Center
Hasenäcker, Jana; Schröter, Pauline; Schroeder, Sascha
2017-01-01
The developmental trajectory of the use of morphemes is still unclear. We investigated the emergence of morphological effects on visual word recognition in German in a large sample across the complete course of reading acquisition in elementary school. To this end, we analyzed lexical decision data on a total of 1,152 words and pseudowords from a…
Transposed Letter Priming with Horizontal and Vertical Text in Japanese and English Readers
ERIC Educational Resources Information Center
Witzel, Naoko; Qiao, Xiaomei; Forster, Kenneth
2011-01-01
It is well established that in masked priming, a target word (e.g., "JUDGE") is primed more effectively by a transposed letter (TL) prime (e.g., "jugde") than by an orthographic control prime (e.g., "junpe"). This is inconsistent with the slot coding schemes used in many models of visual word recognition. Several…
Effects of Study Task on the Neural Correlates of Source Encoding
ERIC Educational Resources Information Center
Park, Heekyeong; Uncapher, Melina R.; Rugg, Michael D.
2008-01-01
The present study investigated whether the neural correlates of source memory vary according to study task. Subjects studied visually presented words in one of two background contexts. In each test, subjects made old/new recognition and source memory judgments. In one study test cycle, study words were subjected to animacy judgments, whereas in…
The influence of visual contrast and case changes on parafoveal preview benefits during reading.
Wang, Chin-An; Inhoff, Albrecht W
2010-04-01
Reingold and Rayner (2006) showed that the visual contrast of a fixated target word influenced its viewing duration, but not the viewing of the next (posttarget) word in the text that was shown in regular contrast. Configurational target changes, by contrast, influenced target and posttarget viewing. The current study examined whether this effect pattern can be attributed to differential processing of the posttarget word during target viewing. A boundary paradigm (Rayner, 1975) was used to provide an informative or uninformative posttarget preview and to reveal the word when it was fixated. Consistent with the earlier study, more time was spent viewing the target when its visual contrast was low and its configuration unfamiliar. Critically, target contrast had no effect on the acquisition of useful information from a posttarget preview, but an unfamiliar target configuration diminished the usefulness of an informative posttarget preview. These findings are consistent with Reingold and Rayner's (2006) claim that saccade programming and attention shifting during reading can be controlled by functionally distinct word recognition processes.
Boltzmann, Melanie; Rüsseler, Jascha
2013-12-13
Event-related brain potentials (ERPs) were used to investigate training-related changes in fast visual word recognition of functionally illiterate adults. Analyses focused on the left-lateralized occipito-temporal N170, which represents the earliest processing of visual word forms. Event-related brain potentials were recorded from 20 functional illiterates receiving intensive literacy training for adults, 10 functional illiterates not participating in the training and 14 regular readers while they read words, pseudowords or viewed symbol strings. Subjects were required to press a button whenever a stimulus was immediately repeated. Attending intensive literacy training was associated with improvements in reading and writing skills and with an increase of the word-related N170 amplitude. For untrained functional illiterates and regular readers no changes in literacy skills or N170 amplitude were observed. Results of the present study suggest that the word-related N170 can still be modulated in adulthood as a result of the improvements in literacy skills. PMID:24330622
The posterior parietal cortex in recognition memory: a neuropsychological study.
Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A
2008-01-01
Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.
The role of tone and segmental information in visual-word recognition in Thai.
Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira
2017-07-01
Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.
ERIC Educational Resources Information Center
Evans, Karen M.; Federmeier, Kara D.
2009-01-01
Hemispheric differences in the use of memory retrieval cues were examined in a continuous recognition design, using visual half-field presentation to bias the processing of test words. A speeded recognition task revealed general accuracy and response time advantages for items whose test presentation was biased to the left hemisphere. A second…
Revisiting Huey: on the importance of the upper part of words during reading.
Perea, Manuel
2012-12-01
Recent research has shown that the upper part of words enjoys an advantage over the lower part of words in the recognition of isolated words. The goal of the present article was to examine how removing the upper/lower part of the words influences eye movement control during silent normal reading. The participants' eye movements were monitored when reading intact sentences and when reading sentences in which the upper or the lower portion of the text was deleted. Results showed a greater reading cost (longer fixations) when the upper part of the text was removed than when the lower part of the text was removed (i.e., it influenced when to move the eyes). However, there was little influence on the initial landing position on a target word (i.e., on the decision as to where to move the eyes). In addition, lexical-processing difficulty (as inferred from the magnitude of the word frequency effect on a target word) was affected by text degradation. The implications of these findings for models of visual-word recognition and reading are discussed.
Lee, Jong-Seok; Park, Cheol Hoon
2010-08-01
We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
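The abstract does not spell out the algorithm's internals, but its core idea (simulated annealing interleaved with a local-improvement operator that replaces the current solution only when it scores better) can be sketched generically. The objective, neighbor, and local-step functions below are illustrative stand-ins, not the HMM likelihood the authors optimize:

```python
import math
import random

def hybrid_sa(objective, x0, neighbor, local_opt,
              t0=1.0, cooling=0.95, steps=500, seed=0):
    """Simulated annealing with a local-improvement operator (minimization).

    After each Metropolis move, `local_opt` proposes a refined candidate,
    which replaces the current solution only if it scores better -- the
    'hybrid' step described in the abstract, sketched generically here.
    """
    rng = random.Random(seed)
    x, fx = x0, objective(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = objective(y)
        # Standard Metropolis acceptance rule.
        if fy <= fx or rng.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        # Local optimization: keep the refined point only if it improves.
        z = local_opt(x)
        fz = objective(z)
        if fz < fx:
            x, fx = z, fz
        if fx < fbest:
            best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy usage: minimize a 1-D multimodal function (not an HMM objective).
f = lambda x: (x - 2.0) ** 2 + math.sin(5 * x)
step = lambda x, rng: x + rng.uniform(-0.5, 0.5)
nudge = lambda x: x - 0.01 * (2 * (x - 2.0) + 5 * math.cos(5 * x))  # one gradient step
x_best, f_best = hybrid_sa(f, x0=-3.0, neighbor=step, local_opt=nudge)
```

The local-improvement step is what distinguishes the hybrid from plain SA: it speeds convergence without sacrificing the stochastic exploration that lets SA escape the local optima where expectation-maximization gets stuck.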
McMenamin, Brenton W.; Deason, Rebecca G.; Steele, Vaughn R.; Koutstaal, Wilma; Marsolek, Chad J.
2014-01-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. PMID:25528436
McMenamin, Brenton W; Deason, Rebecca G; Steele, Vaughn R; Koutstaal, Wilma; Marsolek, Chad J
2015-02-01
Previous research indicates that dissociable neural subsystems underlie abstract-category (AC) recognition and priming of objects (e.g., cat, piano) and specific-exemplar (SE) recognition and priming of objects (e.g., a calico cat, a different calico cat, a grand piano, etc.). However, the degree of separability between these subsystems is not known, despite the importance of this issue for assessing relevant theories. Visual object representations are widely distributed in visual cortex, thus a multivariate pattern analysis (MVPA) approach to analyzing functional magnetic resonance imaging (fMRI) data may be critical for assessing the separability of different kinds of visual object processing. Here we examined the neural representations of visual object categories and visual object exemplars using multi-voxel pattern analyses of brain activity elicited in visual object processing areas during a repetition-priming task. In the encoding phase, participants viewed visual objects and the printed names of other objects. In the subsequent test phase, participants identified objects that were either same-exemplar primed, different-exemplar primed, word-primed, or unprimed. In visual object processing areas, classifiers were trained to distinguish same-exemplar primed objects from word-primed objects. Then, the abilities of these classifiers to discriminate different-exemplar primed objects and word-primed objects (reflecting AC priming) and to discriminate same-exemplar primed objects and different-exemplar primed objects (reflecting SE priming) were assessed. Results indicated that (a) repetition priming in occipital-temporal regions is organized asymmetrically, such that AC priming is more prevalent in the left hemisphere and SE priming is more prevalent in the right hemisphere, and (b) AC and SE subsystems are weakly modular, not strongly modular or unified. Copyright © 2014 Elsevier Inc. All rights reserved.
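The cross-decoding logic of this analysis (train a classifier on same-exemplar vs. word-primed patterns, then test its transfer to other condition contrasts) can be illustrated on synthetic data. The nearest-centroid classifier, the 3-voxel patterns, and all condition means below are hypothetical simplifications, not the authors' pipeline:

```python
import random

rng = random.Random(1)

def noisy(mean, n=40):
    """Simulated voxel patterns: condition mean plus Gaussian noise."""
    return [[m + rng.gauss(0, 0.5) for m in mean] for _ in range(n)]

def centroid(patterns):
    k = len(patterns[0])
    return [sum(p[i] for p in patterns) / len(patterns) for i in range(k)]

def classify(x, c_a, c_b):
    """Nearest-centroid decision: label 'A' or 'B' by squared distance."""
    da = sum((xi - ci) ** 2 for xi, ci in zip(x, c_a))
    db = sum((xi - ci) ** 2 for xi, ci in zip(x, c_b))
    return "A" if da < db else "B"

# Hypothetical 3-voxel condition means (illustrative values only).
same_train = noisy([1.0, 0.2, 0.1])   # same-exemplar primed (training)
word_train = noisy([0.1, 0.9, 0.2])   # word primed (training)
diff_test  = noisy([0.8, 0.3, 0.1])   # different-exemplar primed (test)
word_test  = noisy([0.1, 0.9, 0.2])   # word primed (held-out test)

# "Train" on same-exemplar vs. word-primed trials...
c_same, c_word = centroid(same_train), centroid(word_train)

# ...then test transfer to different-exemplar vs. word-primed trials,
# the analogue of the AC-priming discrimination in the abstract.
hits = sum(classify(x, c_same, c_word) == "A" for x in diff_test) \
     + sum(classify(x, c_same, c_word) == "B" for x in word_test)
ac_accuracy = hits / (len(diff_test) + len(word_test))
```

Above-chance transfer to the held-out contrast is what licenses the inference that the trained discriminant carries category-level (AC) information rather than exemplar-specific detail.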
Symbolic Play Connects to Language through Visual Object Recognition
ERIC Educational Resources Information Center
Smith, Linda B.; Jones, Susan S.
2011-01-01
Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…
Letter position coding across modalities: the case of Braille readers.
Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo
2012-01-01
The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may involve more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus.
Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly
2017-08-16
This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.
Human striatal activation during adjustment of the response criterion in visual word recognition.
Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred
2011-02-01
Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.
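The behavioural sequential effect reported here (responses faster on trials that follow a word trial than on trials that follow a nonword trial) reduces to conditioning each trial's RT on the lexicality of the previous trial. A minimal sketch with made-up RTs:

```python
# Toy lexical decision data: (stimulus lexicality, reaction time in ms).
trials = [("word", 560), ("word", 520), ("nonword", 530), ("word", 600),
          ("nonword", 610), ("nonword", 660), ("word", 640), ("word", 515)]

def mean_rt_after(prev_type, trials):
    """Mean RT of trials whose *preceding* trial had the given lexicality."""
    rts = [rt for (prev, _), (_, rt) in zip(trials, trials[1:])
           if prev == prev_type]
    return sum(rts) / len(rts)

after_word = mean_rt_after("word", trials)        # RTs following a word trial
after_nonword = mean_rt_after("nonword", trials)  # RTs following a nonword trial
# A positive difference mirrors the reported trial-by-trial criterion shift.
sequential_effect = after_nonword - after_word
```

In the actual study this contrast is computed irrespective of the current trial's lexicality, which is what isolates criterion setting from stimulus-driven processing.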
Risse, Sarah
2014-07-15
The visual span (or "uncrowded window"), which limits the sensory information on each fixation, has been shown to determine reading speed in tasks involving rapid serial visual presentation of single words. The present study investigated whether this is also true for fixation durations during sentence reading when all words are presented at the same time and parafoveal preview of words prior to fixation typically reduces later word-recognition times. If so, a larger visual span may allow more efficient parafoveal processing and thus faster reading. In order to test this hypothesis, visual span profiles (VSPs) were collected from 60 participants and related to data from an eye-tracking reading experiment. The results confirmed a positive relationship between the readers' VSPs and fixation-based reading speed. However, this relationship was not determined by parafoveal processing. There was no evidence that individual differences in VSPs predicted differences in parafoveal preview benefit. Nevertheless, preview benefit correlated with reading speed, suggesting an independent effect on oculomotor control during reading. In summary, the present results indicate a more complex relationship between the visual span, parafoveal processing, and reading speed than initially assumed. © 2014 ARVO.
Early access to abstract representations in developing readers: evidence from masked priming.
Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel
2013-07-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing - as measured by masked priming - in young children (3rd and 6th graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktzb-ktAb] - note that the three initial letters are connected in prime and target) than for those that do not ([ktxb-ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition (e.g. -) was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers. © 2013 Blackwell Publishing Ltd.
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval.
These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated for repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
Errorless discrimination and picture fading as techniques for teaching sight words to TMR students.
Walsh, B F; Lamberts, F
1979-03-01
The effectiveness of two approaches for teaching beginning sight words to 30 TMR students was compared. In Dorry and Zeaman's picture-fading technique, words are taught through association with pictures that are faded out over a series of trials, while in the Edmark program errorless-discrimination technique, words are taught through shaped sequences of visual and auditory-visual matching-to-sample, with the target word first appearing alone and eventually appearing with orthographically similar words. Students were instructed on two lists of 10 words each, one list in the picture-fading and one in the discrimination method, in a double counter-balanced, repeated-measures design. Covariance analysis on three measures (word identification, word recognition, and picture-word matching) showed highly significant differences between the two methods. Students' performance was better after instruction with the errorless-discrimination method than after instruction with the picture-fading method. The findings on picture fading were interpreted as indicating a possible failure of the shifting of control from picture to printed word that earlier researchers have hypothesized as occurring.
Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind
Burton, Harold; Sinclair, Robert J.; Agato, Alvin
2012-01-01
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836
Recognition memory for Braille or spoken words: an fMRI study in early blind.
Burton, Harold; Sinclair, Robert J; Agato, Alvin
2012-02-15
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollecting the sensory properties of "old" words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Quémart, Pauline; Casalis, Séverine
2014-01-01
We report two experiments that investigated whether phonological and/or orthographic shifts in a base word interfere with morphological processing by French 3rd, 4th, and 5th graders and adults (as a control group) along the time course of visual word recognition. In both experiments, prime-target pairs shared four possible relationships:…
ERIC Educational Resources Information Center
Borowsky, Ron; Besner, Derek
2006-01-01
D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…
Syllable Frequency Effects in Visual Word Recognition: Developmental Approach in French Children
ERIC Educational Resources Information Center
Maionchi-Pino, Norbert; Magnan, Annie; Ecalle, Jean
2010-01-01
This study investigates the syllable's role in the normal reading acquisition of French children at three grade levels (1st, 3rd, and 5th), using a modified version of Cole, Magnan, and Grainger's (1999) paradigm. We focused on the effects of syllable frequency and word frequency. The results suggest that from the first to third years of reading…
Alternating-Script Priming in Japanese: Are Katakana and Hiragana Characters Interchangeable?
ERIC Educational Resources Information Center
Perea, Manuel; Nakayama, Mariko; Lupker, Stephen J.
2017-01-01
Models of written word recognition in languages using the Roman alphabet assume that a word's visual form is quickly mapped onto abstract units. This proposal is consistent with the finding that masked priming effects are of similar magnitude from lowercase, uppercase, and alternating-case primes (e.g., beard-BEARD, BEARD-BEARD, and BeArD-BEARD).…
McQueen, James M; Huettig, Falk
2014-01-01
Three cross-modal priming experiments examined the influence of preexposure to pictures and printed words on the speed of spoken word recognition. Targets for auditory lexical decision were spoken Dutch words and nonwords, presented in isolation (Experiments 1 and 2) or after a short phrase (Experiment 3). Auditory stimuli were preceded by primes, which were pictures (Experiments 1 and 3) or those pictures' printed names (Experiment 2). Prime-target pairs were phonologically onset related (e.g., pijl-pijn, arrow-pain), were from the same semantic category (e.g., pijl-zwaard, arrow-sword), or were unrelated on both dimensions. Phonological interference and semantic facilitation were observed in all experiments. Priming magnitude was similar for pictures and printed words and did not vary with picture viewing time or number of pictures in the display (either one or four). These effects arose even though participants were not explicitly instructed to name the pictures and even when strategic naming would interfere with lexical decision making. This suggests that, by default, processing of related pictures and printed words influences how quickly we recognize spoken words.
What can we learn from learning models about sensitivity to letter-order in visual word recognition?
Lerner, Itamar; Armstrong, Blair C.; Frost, Ram
2014-01-01
Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding to be a core and universal principle of the reading process. Here we argue that such an approach does not capture cross-linguistic differences in transposed-letter effects, nor does it explain them. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter-transposition and letter substitution when it had learned to process words in the context of different linguistic environments. The results show that in spite of the neurobiological noise involved in registering letter-position in all languages, flexibility and inflexibility in coding letter order is also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
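The transposed-letter phenomenon the authors model can be made concrete by comparing two classic letter-position coding schemes. Under strict position-specific ("slot") coding, a transposition (jugde) and a two-letter substitution (jupte, a hypothetical control) are equally far from judge, whereas open-bigram coding rates the transposition as much closer, matching readers' tolerance. This is a textbook illustration, not the connectionist model of the study:

```python
from itertools import combinations

def slot_similarity(a, b):
    """Position-specific ('slot') coding: proportion of matching positions."""
    return sum(x == y for x, y in zip(a, b)) / max(len(a), len(b))

def open_bigrams(word):
    """Set of ordered letter pairs, contiguous or not (open bigrams).

    Set-based, so it assumes no repeated letters (true for these examples).
    """
    return {x + y for x, y in combinations(word, 2)}

def bigram_similarity(a, b):
    """Jaccard overlap of the two words' open-bigram sets."""
    sa, sb = open_bigrams(a), open_bigrams(b)
    return len(sa & sb) / len(sa | sb)

base, transposed, substituted = "judge", "jugde", "jupte"

# Slot coding scores both distortions identically (3 of 5 positions match)...
slot_tl, slot_sub = slot_similarity(base, transposed), slot_similarity(base, substituted)
# ...while open bigrams preserve most of the transposed word's pair structure.
ob_tl, ob_sub = bigram_similarity(base, transposed), bigram_similarity(base, substituted)
```

The transposition keeps 9 of judge's 10 open bigrams (only "dg" flips to "gd"), which is why flexible-order schemes predict the perceptual closeness of jugde; the modelling question raised in the abstract is whether that flexibility should be built in or learned from a language's orthographic statistics.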
The role of visual representations during the lexical access of spoken words
Lewis, Gwyneth; Poeppel, David
2015-01-01
Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579
The role of visual representations during the lexical access of spoken words.
Lewis, Gwyneth; Poeppel, David
2014-07-01
Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. Copyright © 2014 Elsevier Inc. All rights reserved.
Brain activation for lexical decision and reading aloud: two sides of the same coin?
Carreiras, Manuel; Mechelli, Andrea; Estévez, Adelina; Price, Cathy J
2007-03-01
This functional magnetic resonance imaging study compared the neuronal implementation of word and pseudoword processing during two commonly used word recognition tasks: lexical decision and reading aloud. In the lexical decision task, participants made a finger-press response to indicate whether a visually presented letter string is a word or a pseudoword (e.g., "paple"). In the reading-aloud task, participants read aloud visually presented words and pseudowords. The same sets of words and pseudowords were used for both tasks. This enabled us to look for the effects of task (lexical decision vs. reading aloud), lexicality (words vs. nonwords), and the interaction of lexicality with task. We found very similar patterns of activation for lexical decision and reading aloud in areas associated with word recognition and lexical retrieval (e.g., left fusiform gyrus, posterior temporal cortex, pars opercularis, and bilateral insulae), but task differences were observed bilaterally in sensorimotor areas. Lexical decision increased activation in areas associated with decision making and finger tapping (bilateral postcentral gyri, supplementary motor area, and right cerebellum), whereas reading aloud increased activation in areas associated with articulation and hearing the sound of the spoken response (bilateral precentral gyri, superior temporal gyri, and posterior cerebellum). The effect of lexicality (pseudoword vs. words) was also remarkably consistent across tasks. Nevertheless, increased activation for pseudowords relative to words was greater in the left precentral cortex for reading than lexical decision, and greater in the right inferior frontal cortex for lexical decision than reading. We attribute these effects to differences in the demands on speech production and decision-making processes, respectively.
On the contribution of unconscious processes to recognition memory.
Cleary, Anne M
2012-01-01
Voss et al. review work showing unconscious contributions to recognition memory. An electrophysiological effect, the N300, appears to signify an unconscious recognition process. Whether such unconscious recognition requires highly specific experimental circumstances or can occur in typical types of recognition testing situations has remained a question. The fact that the N300 has also been shown to be the sole electrophysiological correlate of the recognition-without-identification effect that occurs with visual word fragments suggests that unconscious processes may contribute to a wider range of recognition testing situations than those originally investigated by Voss and colleagues. Some implications of this possibility are discussed.
Evans, Karen M.; Federmeier, Kara D.
2009-01-01
We examined the nature and timecourse of hemispheric asymmetries in verbal memory by recording event-related potentials (ERPs) in a continuous recognition task. Participants made overt recognition judgments to test words presented in central vision that were either novel (new words) or had been previously presented in the left or right visual field (old words). An ERP memory effect linked to explicit retrieval revealed no asymmetries for words repeated at short and medium retention intervals, but at longer repetition lags (20–50 intervening words) this ‘old/new effect’ was more pronounced for words whose study presentation had been biased to the right hemisphere (RH). Additionally, a repetition effect linked to more implicit recognition processes (P2 amplitude changes) was observed at all lags for words preferentially encoded by the RH but was not observed for left hemisphere (LH)-encoded words. These results are consistent with theories that the RH encodes verbal stimuli more veridically whereas the LH encodes in a more abstract manner. The current findings provide a critical link between prior work on memory asymmetries, which has emphasized general LH advantages for verbal material, and on language comprehension, which has pointed to an important role for the RH in language processes that require the retention and integration of verbal information over long time spans. PMID:17291547
Preserved visual lexicosemantics in global aphasia: a right-hemisphere contribution?
Gold, B T; Kertesz, A
2000-12-01
Extensive testing of a patient, GP, who encountered large-scale destruction of left-hemisphere (LH) language regions was undertaken in order to address several issues concerning the ability of nonperisylvian areas to extract meaning from printed words. Testing revealed recognition of superordinate boundaries of animals, tools, vegetables, fruit, clothes, and furniture. GP was able to distinguish proper names from other nouns and from nonwords. GP was also able to differentiate words representing living things from those denoting nonliving things. The extent of LH infarct resulting in a global impairment to phonological and syntactic processing suggests LH specificity for these functions but considerable right-hemisphere (RH) participation in visual lexicosemantic processing. The relative preservation of visual lexicosemantic abilities despite severe impairment to all aspects of phonological coding demonstrates the importance of the direct route to the meaning of single printed words.
Impaired recognition of faces and objects in dyslexia: Evidence for ventral stream dysfunction?
Sigurdardottir, Heida Maria; Ívarsson, Eysteinn; Kristinsdóttir, Kristjana; Kristjánsson, Árni
2015-09-01
The objective of this study was to establish whether or not dyslexics are impaired at the recognition of faces and other complex nonword visual objects. This would be expected based on a meta-analysis revealing that children and adult dyslexics show functional abnormalities within the left fusiform gyrus, a brain region high up in the ventral visual stream, which is thought to support the recognition of words, faces, and other objects. Twenty adult dyslexics (M = 29 years) and 20 matched typical readers (M = 29 years) participated in the study. One dyslexic-typical reader pair was excluded based on Adult Reading History Questionnaire scores and IS-FORM reading scores. Performance was measured on 3 high-level visual processing tasks: the Cambridge Face Memory Test, the Vanderbilt Holistic Face Processing Test, and the Vanderbilt Expertise Test. People with dyslexia are impaired in their recognition of faces and other visually complex objects. Their holistic processing of faces appears to be intact, suggesting that dyslexics may instead be specifically impaired at part-based processing of visual objects. The difficulty that people with dyslexia experience with reading might be the most salient manifestation of a more general high-level visual deficit. (c) 2015 APA, all rights reserved.
Bletzer, Keith V
2015-01-01
Satisfaction surveys are common in the field of health education, as a means of assisting organizations to improve the appropriateness of training materials and the effectiveness of facilitation-presentation. Data can be qualitative, and the analysis of such data often becomes specialized. This technical article aims to reveal whether qualitative survey results can be visualized by presenting them as a Word Cloud. Qualitative materials in the form of written comments on an agency-specific satisfaction survey were coded and quantified. The resulting quantitative data were used to convert comments into "input terms" to generate Word Clouds to increase comprehension and accessibility through visualization of the written responses. A three-tier display incorporated a Word Cloud at the top, followed by the corresponding frequency table, and a textual summary of the qualitative data represented by the Word Cloud imagery. This mixed format reflects the recognition that people vary in which format is most effective for assimilating new information. The combination of visual representation through Word Clouds complemented by quantified qualitative materials is one means of increasing comprehensibility for a range of stakeholders, who might not be familiar with numerical tables or statistical analyses.
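The coding-and-quantifying step described in this abstract can be sketched in a few lines. The comment codes below are invented for illustration; the resulting term-frequency table is the kind of "input terms" mapping that Word Cloud generators typically accept.

```python
from collections import Counter

# Hypothetical coded survey comments: each written response has been
# assigned one or more analyst-defined codes.
coded_comments = [
    ["materials", "pacing"],
    ["materials"],
    ["facilitator", "materials"],
    ["pacing"],
]

def input_terms(coded):
    """Quantify qualitative codes into a term-frequency table.

    Word Cloud generators typically accept such a term -> count
    mapping; more frequent terms are rendered in larger type.
    """
    counts = Counter()
    for codes in coded:
        counts.update(codes)
    return counts

freqs = input_terms(coded_comments)
print(freqs.most_common())  # "materials" (3 mentions) would render largest
```

The same frequency table can then be printed alongside the cloud, matching the article's three-tier display of imagery, table, and textual summary.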
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta
2016-09-01
In masked priming lexical decision experiments, there is a matched-case identity advantage for nonwords, but not for words (e.g., ERTAR-ERTAR < ertar-ERTAR; ALTAR-ALTAR = altar-ALTAR). This dissociation has been interpreted in terms of feedback from higher levels of processing during orthographic encoding. Here, we examined whether a matched-case identity advantage also occurs for words when top-down feedback is minimized. We employed a task that taps prelexical orthographic processes: the masked prime same-different task. For "same" trials, results showed faster response times for targets when preceded by a briefly presented matched-case identity prime than when preceded by a mismatched-case identity prime. Importantly, this advantage was similar in magnitude for nonwords and words. This finding constrains the interplay of bottom-up versus top-down mechanisms in models of visual-word identification.
The process of spoken word recognition in the face of signal degradation.
Farris-Trimble, Ashley; McMurray, Bob; Cigrand, Nicole; Tomblin, J Bruce
2014-02-01
Though much is known about how words are recognized, little research has focused on how a degraded signal affects the fine-grained temporal aspects of real-time word recognition. The perception of degraded speech was examined in two populations with the goal of describing the time course of word recognition and lexical competition. Thirty-three postlingually deafened cochlear implant (CI) users and 57 normal hearing (NH) adults (16 in a CI-simulation condition) participated in a visual world paradigm eye-tracking task in which their fixations to a set of phonologically related items were monitored as they heard one item being named. Each degraded-speech group was compared with a set of age-matched NH participants listening to unfiltered speech. CI users and the simulation group showed a delay in activation relative to the NH listeners, and there is weak evidence that the CI users showed differences in the degree of peak and late competitor activation. In general, though, the degraded-speech groups behaved statistically similarly with respect to activation levels. PsycINFO Database Record (c) 2014 APA, all rights reserved.
Eye-tracking the time-course of novel word learning and lexical competition in adults and children.
Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G
2017-04-01
Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.
No effect of stress on false recognition.
Beato, María Soledad; Cadavid, Sara; Pulido, Ramón F; Pinho, María Salomé
2013-02-01
The present study aimed to analyze the effect of acute stress on false recognition in the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, lists of words associated with a non-presented critical lure are studied and, in a subsequent memory test, critical lures are often falsely remembered. In two experiments, participants were randomly assigned to either the stress group (Trier Social Stress Test) or the no-stress control group. Because we sought to control the level of processing at encoding, in Experiment 1, participants created a visual mental image for each presented word (deep encoding). In Experiment 2, participants performed a shallow encoding (to respond whether each word contained the letter "o"). The results indicated that, in both experiments, as predicted, heart rate and STAI-S scores increased only in the stress group. However, false recognition did not differ across stress and no-stress groups. Results suggest that, although psychosocial stress was successfully induced, it did not increase vulnerability to DRM false recognition, regardless of the level of processing.
Encoding in the visual word form area: an fMRI adaptation study of words versus handwriting.
Barton, Jason J S; Fox, Christopher J; Sekunova, Alla; Iaria, Giuseppe
2010-08-01
Written texts are not just words but complex multidimensional stimuli, including aspects such as case, font, and handwriting style, for example. Neuropsychological reports suggest that left fusiform lesions can impair the reading of text for word (lexical) content, being associated with alexia, whereas right-sided lesions may impair handwriting recognition. We used fMRI adaptation in 13 healthy participants to determine if repetition-suppression occurred for words but not handwriting in the left visual word form area (VWFA) and the reverse in the right fusiform gyrus. Contrary to these expectations, we found adaptation for handwriting but not for words in both the left VWFA and the right VWFA homologue. A trend to adaptation for words but not handwriting was seen only in the left middle temporal gyrus. An analysis of anterior and posterior subdivisions of the left VWFA also failed to show any adaptation for words. We conclude that the right and the left fusiform gyri show similar patterns of adaptation for handwriting, consistent with a predominantly perceptual contribution to text processing.
Effect of Syllable Congruency in Sixth Graders in the Lexical Decision Task with Masked Priming
ERIC Educational Resources Information Center
Chetail, Fabienne; Mathey, Stephanie
2012-01-01
The aim of this study was to investigate the role of the syllable in visual recognition of French words in Grade 6. To do so, the syllabic congruency effect was examined in the lexical decision task combined with masked priming. Target words were preceded by pseudoword primes sharing the first letters that either corresponded to the syllable…
Kelly, R R; Tomlinson-Keasey, C
1976-12-01
Eleven hearing-impaired children and 11 normal-hearing children (mean age = 4 years 11 months) were visually presented with familiar items in either picture or word form. Subjects were asked to recognize the stimuli they had seen from cue cards consisting of pictures or words. They were then asked to recall the sequence of stimuli by arranging the cue cards selected. The hearing-impaired group and normal-hearing subjects performed differently with the picture/picture (P/P) and word/word (W/W) modes in the recognition phase. The hearing impaired performed equally well with both modes (P/P and W/W), while the normal hearing did significantly better on the P/P mode. Furthermore, the normal-hearing group showed no difference in processing like modes (P/P and W/W) when compared to unlike modes (W/P and P/W). In contrast, the hearing-impaired subjects did better on like modes. The results were interpreted, in part, as supporting the position that young normal-hearing children dual code their visual information better than hearing-impaired children.
Optimal viewing position in vertically and horizontally presented Japanese words.
Kajii, N; Osaka, N
2000-11-01
In the present study, the optimal viewing position (OVP) phenomenon in Japanese Hiragana was investigated, with special reference to a comparison between the vertical and the horizontal meridians in the visual field. In the first experiment, word recognition scores were determined while the eyes were fixating predetermined locations in vertically and horizontally displayed words. Similar to what has been reported for Roman scripts, OVP curves, which were asymmetric with respect to the beginning of words, were observed in both conditions. However, this asymmetry was less pronounced for vertically than for horizontally displayed words. In the second experiment, the visibility of individual characters within strings was examined for the vertical and horizontal meridians. As for Roman characters, letter identification scores were better in the right than in the left visual field. However, identification scores did not differ between the upper and the lower sides of fixation along the vertical meridian. The results showed that the model proposed by Nazir, O'Regan, and Jacobs (1991) cannot entirely account for the OVP phenomenon. A model in which visual and lexical factors are combined is proposed instead.
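The Nazir, O'Regan, and Jacobs (1991) account referred to in this abstract treats word recognition probability as the product of per-letter identification probabilities that fall off with distance from fixation. A minimal sketch, with asymmetric linear drop-off rates chosen purely for illustration:

```python
def word_recognition_prob(fixation, length, drop_left=0.06, drop_right=0.04):
    """Sketch of a Nazir-style optimal-viewing-position account:
    word recognition probability is the product of per-letter
    identification probabilities, which fall off linearly with
    distance from fixation. The steeper left-field drop rate is an
    illustrative assumption, not a fitted parameter."""
    p = 1.0
    for pos in range(length):
        dist = abs(pos - fixation)
        rate = drop_left if pos < fixation else drop_right
        p *= max(0.0, 1.0 - rate * dist)
    return p

for f in range(5):
    # highest for fixations at or just left of the word's center
    print(f, round(word_recognition_prob(f, 5), 3))
```

With a steeper left-of-fixation drop-off, the product is maximized at a fixation left of the word's center, qualitatively reproducing the asymmetric OVP curves that this entry reports for horizontal scripts.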
Schuster, Sarah; Hawelka, Stefan; Hutzler, Florian; Kronbichler, Martin; Richlan, Fabio
2016-01-01
Word length, frequency, and predictability count among the most influential variables during reading. Their effects are well-documented in eye movement studies, but pertinent evidence from neuroimaging primarily stems from single-word presentations. We investigated the effects of these variables during reading of whole sentences with simultaneous eye-tracking and functional magnetic resonance imaging (fixation-related fMRI). Increasing word length was associated with increasing activation in occipital areas linked to visual analysis. Additionally, length elicited a U-shaped modulation (i.e., least activation for medium-length words) within a brain stem region presumably linked to eye movement control. These effects, however, were diminished when accounting for multiple fixation cases. Increasing frequency was associated with decreasing activation within left inferior frontal, superior parietal, and occipito-temporal regions. The function of the latter region—hosting the putative visual word form area—was originally considered limited to sublexical processing. An exploratory analysis revealed that increasing predictability was associated with decreasing activation within middle temporal and inferior frontal regions previously implicated in memory access and unification. The findings are discussed with regard to their correspondence with findings from single-word presentations and with regard to neurocognitive models of visual word recognition, semantic processing, and eye movement control during reading. PMID:27365297
Alesi, Marianna; Rappo, Gaetano; Pepi, Annamaria
2016-01-01
One of the most significant current discussions has led to the hypothesis that domain-specific training programs alone are not enough to improve reading achievement or working memory abilities. Incremental or Entity personal conceptions of intelligence may be assumed to be an important prognostic factor to overcome domain-specific deficits. Specifically, incremental students tend to be more oriented toward change and autonomy and are able to adopt more efficacious strategies. This study aims at examining the effect of personal conceptions of intelligence in strengthening the efficacy of a multidimensional intervention program designed to improve decoding abilities and working memory. Participants included two children (M age = 10 years) with developmental dyslexia and different conceptions of intelligence. The children were tested on a whole battery of reading and spelling tests commonly used in the assessment of reading disabilities in Italy. Afterwards, they were given a multimedia test to measure motivational factors such as conceptions of intelligence and achievement goals. The children took part in the T.I.R.D. Multimedia Training for the Rehabilitation of Dyslexia (Rappo and Pepi, 2010) reinforced by specific units to improve verbal working memory for 3 months. This training consisted of specific tasks to rehabilitate both visual and phonological strategies (sound blending, word segmentation, alliteration test and rhyme test, letter recognition, digraph recognition, trigraph recognition, and word recognition as samples of visual tasks) and verbal working memory (rapid words and non-words recognition). Posttest evaluations showed that the child holding the incremental theory of intelligence improved more than the child holding a static representation. On the whole, this study highlights the importance of treatment programs in which the specificity of deficits and motivational factors are both taken into account.
There is a need to plan multifaceted intervention programs based on a transverse approach, considering both cognitive and motivational factors. PMID:26779069
Reversing the picture superiority effect: a speed-accuracy trade-off study of recognition memory.
Boldini, Angela; Russo, Riccardo; Punia, Sahiba; Avons, S E
2007-01-01
Speed-accuracy trade-off methods have been used to contrast single- and dual-process accounts of recognition memory. With these procedures, subjects are presented with individual test items and required to make recognition decisions under various time constraints. In three experiments, we presented words and pictures to be intentionally learned; test stimuli were always visually presented words. At test, we manipulated the interval between the presentation of each test stimulus and that of a response signal, thus controlling the amount of time available to retrieve target information. The standard picture superiority effect was significant in long response deadline conditions (i.e., ≥ 2,000 msec). Conversely, a significant reverse picture superiority effect emerged at short response-signal deadlines (< 200 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory. Alternative accounts are also discussed.
Is the orthographic/phonological onset a single unit in reading aloud?
Mousikou, Petroula; Coltheart, Max; Saunders, Steven; Yen, Lisa
2010-02-01
Two main theories of visual word recognition have been developed regarding the way orthographic units in printed words map onto phonological units in spoken words. One theory suggests that a string of single letters or letter clusters corresponds to a string of phonemes (Coltheart, 1978; Venezky, 1970), while the other suggests that a string of single letters or letter clusters corresponds to coarser phonological units, for example, onsets and rimes (Treiman & Chafetz, 1987). These theoretical assumptions were critical for the development of coding schemes in prominent computational models of word recognition and reading aloud. In a reading-aloud study, we tested whether the human reading system represents the orthographic/phonological onset of printed words and nonwords as single units or as separate letters/phonemes. Our results, which favored a letter and not an onset-coding scheme, were successfully simulated by the dual-route cascaded (DRC) model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001). A separate experiment was carried out to further adjudicate between 2 versions of the DRC model.
Transposed-letter priming effects in reading aloud words and nonwords.
Mousikou, Petroula; Kinoshita, Sachiko; Wu, Simon; Norris, Dennis
2015-10-01
A masked nonword prime generated by transposing adjacent inner letters in a word (e.g., jugde) facilitates the recognition of the target word (JUDGE) more than a prime in which the relevant letters are replaced by different letters (e.g., junpe). This transposed-letter (TL) priming effect has been widely interpreted as evidence that the coding of letter position is flexible, rather than precise. Although the TL priming effect has been extensively investigated in the domain of visual word recognition using the lexical decision task, very few studies have investigated this empirical phenomenon in reading aloud. In the present study, we investigated TL priming effects in reading aloud words and nonwords and found that these effects are of equal magnitude for the two types of items. We take this result as support for the view that the TL priming effect arises from noisy perception of letter order within the prime prior to the mapping of orthography to phonology.
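The transposed-letter primes described in this entry (jugde from JUDGE) are generated by swapping adjacent inner letters; a minimal sketch of that stimulus manipulation:

```python
def transpose_inner(word, i):
    """Create a transposed-letter (TL) nonword by swapping the adjacent
    letters at positions i and i+1 (0-indexed). Following common practice
    in the TL-priming literature, the first and last letters are left
    untouched."""
    if not 1 <= i < len(word) - 2:
        raise ValueError("transposition must be restricted to inner letters")
    letters = list(word)
    letters[i], letters[i + 1] = letters[i + 1], letters[i]
    return "".join(letters)

print(transpose_inner("judge", 2))  # the classic TL prime "jugde"
```

Replacement-letter controls such as junpe substitute different letters at the same positions instead of swapping, so prime-target letter identity, rather than letter order, is what differs between conditions.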
Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly
2017-01-01
Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
On pleasure and thrill: the interplay between arousal and valence during visual word recognition.
Recio, Guillermo; Conrad, Markus; Hansen, Laura B; Jacobs, Arthur M
2014-07-01
We investigated the interplay between arousal and valence in the early processing of affective words. Event-related potentials (ERPs) were recorded while participants read words organized in an orthogonal design with the factors valence (positive, negative, neutral) and arousal (low, medium, high) in a lexical decision task. We observed faster reaction times for words of positive valence and for those of high arousal. Data from ERPs showed increased early posterior negativity (EPN) suggesting improved visual processing of these conditions. Valence effects appeared for medium and low arousal and were absent for high arousal. Arousal effects were obtained for neutral and negative words but were absent for positive words. These results suggest independent contributions of arousal and valence at early attentional stages of processing. Arousal effects preceded valence effects in the ERP data suggesting that arousal serves as an early alert system preparing a subsequent evaluation in terms of valence. Copyright © 2014 Elsevier Inc. All rights reserved.
Letter Position Coding Across Modalities: The Case of Braille Readers
Perea, Manuel; García-Chamorro, Cristina; Martín-Suesta, Miguel; Gómez, Pablo
2012-01-01
Background The question of how the brain encodes letter position in written words has attracted increasing attention in recent years. A number of models have recently been proposed to accommodate the fact that transposed-letter stimuli like jugde or caniso are perceptually very close to their base words. Methodology Here we examined how letter position coding is attained in the tactile modality via Braille reading. The idea is that Braille word recognition may provide more serial processing than the visual modality, and this may produce differences in the input coding schemes employed to encode letters in written words. To that end, we conducted a lexical decision experiment with adult Braille readers in which the pseudowords were created by transposing/replacing two letters. Principal Findings We found a word-frequency effect for words. In addition, unlike parallel experiments in the visual modality, we failed to find any clear signs of transposed-letter confusability effects. This dissociation highlights the differences between modalities. Conclusions The present data argue against models of letter position coding that assume that transposed-letter effects (in the visual modality) occur at a relatively late, abstract locus. PMID:23071522
Pardos, Maria; Korostenskaja, Milena; Xiang, Jing; Fujiwara, Hisako; Lee, Ki H.; Horn, Paul S.; Byars, Anna; Vannest, Jennifer; Wang, Yingying; Hemasilpin, Nat; Rose, Douglas F.
2015-01-01
Objective evaluation of language function is critical for children with intractable epilepsy under consideration for epilepsy surgery. The purpose of this preliminary study was to evaluate word recognition in children with intractable epilepsy by using magnetoencephalography (MEG). Ten children with intractable epilepsy (M/F 6/4, mean ± SD 13.4 ± 2.2 years) were matched on age and sex to healthy controls. Common nouns were presented simultaneously from visual and auditory sensory inputs in “match” and “mismatch” conditions. Neuromagnetic responses M1, M2, M3, M4, and M5 with latencies of ~100 ms, ~150 ms, ~250 ms, ~350 ms, and ~450 ms, respectively, elicited during the “match” condition were identified. Compared to healthy children, epilepsy patients had both significantly delayed latency of the M1 and reduced amplitudes of M3 and M5 responses. These results provide neurophysiologic evidence of altered word recognition in children with intractable epilepsy. PMID:26146459
The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words
Hoedemaker, Renske S.; Gordon, Peter C.
2016-01-01
In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
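The ex-Gaussian distribution used in the Hoedemaker and Gordon analyses (a Gaussian convolved with an exponential) is a standard descriptive model for response-time distributions. As an illustrative sketch, the method-of-moments estimators below recover its mu, sigma, and tau parameters from a sample of reaction times; published analyses typically use maximum-likelihood fitting instead.

```python
import statistics

def exgauss_moments(rts):
    """Method-of-moments estimates (mu, sigma, tau) for an ex-Gaussian.

    Uses the standard moment relations: tau = s * (skew / 2)**(1/3),
    mu = mean - tau, sigma**2 = s**2 - tau**2, with estimates clamped
    at zero when the sample skew makes them undefined.
    """
    m = statistics.mean(rts)
    s = statistics.stdev(rts)
    n = len(rts)
    skew = (sum((x - m) ** 3 for x in rts) / n) / s ** 3
    tau = s * (skew / 2) ** (1 / 3) if skew > 0 else 0.0
    sigma_sq = s * s - tau * tau
    sigma = sigma_sq ** 0.5 if sigma_sq > 0 else 0.0
    mu = m - tau
    return mu, sigma, tau
```

By construction mu + tau equals the sample mean, so a larger tau (a heavier right tail, i.e., slower responses) pulls the estimate of the Gaussian component leftward; this separation of tail from central tendency is what makes ex-Gaussian fits useful for describing where in the RT distribution a priming effect grows.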
War and peace: morphemes and full forms in a noninteractive activation parallel dual-route model.
Baayen, H; Schreuder, R
This article introduces a computational tool for modeling the process of morphological segmentation in visual and auditory word recognition in the framework of a parallel dual-route model. Copyright 1999 Academic Press.
Georgiou, George; Liu, Cuina; Xu, Shiyang
2017-08-01
Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.
Remember to blink: Reduced attentional blink following instructions to forget.
Taylor, Tracy L
2018-04-24
This study used rapid serial visual presentation (RSVP) to determine whether, in an item-method directed forgetting task, study word processing ends earlier for forget words than for remember words. The critical manipulation required participants to monitor an RSVP stream of black nonsense strings in which a single blue word was embedded. The next item to follow the word was a string of red fs that instructed the participant to forget the word or green rs that instructed the participant to remember the word. After the memory instruction, a probe string of black xs or os appeared at postinstruction positions 1-8. Accuracy in reporting the identity of the probe string revealed an attenuated attentional blink following instructions to forget. A yes-no recognition task that followed the study trials confirmed a directed forgetting effect, with better recognition of remember words than forget words. Considered in the context of control conditions that required participants to commit either all or none of the study words to memory, the pattern of probe identification accuracy following the directed forgetting task argues that an intention to forget releases limited-capacity attentional resources sooner than an instruction to remember, despite participants needing to maintain an ongoing rehearsal set in both cases.
Feature and Region Selection for Visual Learning.
Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando
2016-03-01
Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
Constraints on the Transfer of Perceptual Learning in Accented Speech
Eisner, Frank; Melinger, Alissa; Weber, Andrea
2013-01-01
The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598
ERIC Educational Resources Information Center
Hsiao, Janet Hui-wen
2011-01-01
In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is…
Shuai, Lan; Malins, Jeffrey G
2017-02-01
Despite its prevalence as one of the most highly influential models of spoken word recognition, the TRACE model has yet to be extended to consider tonal languages such as Mandarin Chinese. A key reason for this is that the model in its current state does not encode lexical tone. In this report, we present a modified version of the jTRACE model in which we built on its existing architecture to code for Mandarin phonemes and tones. Units are coded in a way that is meant to capture the similarity in timing of access to vowel and tone information that has been observed in previous studies of Mandarin spoken word recognition. We validated the model by first simulating a recent experiment that had used the visual world paradigm to investigate how native Mandarin speakers process monosyllabic Mandarin words (Malins & Joanisse, 2010). We then subsequently simulated two psycholinguistic phenomena: (1) differences in the timing of resolution of tonal contrast pairs, and (2) the interaction between syllable frequency and tonal probability. In all cases, the model gave rise to results comparable to those of published data with human subjects, suggesting that it is a viable working model of spoken word recognition in Mandarin. It is our hope that this tool will be of use to practitioners studying the psycholinguistics of Mandarin Chinese and will help inspire similar models for other tonal languages, such as Cantonese and Thai.
Parallel language activation and inhibitory control in bimodal bilinguals.
Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen
2015-08-01
Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.
Early access to abstract representations in developing readers: Evidence from masked priming
Perea, Manuel; Abu Mallouh, Reem; Carreiras, Manuel
2013-01-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing, as measured by masked priming, in young children (3rd and 6th graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early moments of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktzb-ktAb]; note that the three initial letters are connected in prime and target) than from those that do not ([ktxb-ktAb]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of primes. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers. PMID:23786474
ERIC Educational Resources Information Center
Vainio, Seppo; Anneli, Pajunen; Hyona, Jukka
2014-01-01
This study investigated the effect of the first language (L1) on the visual word recognition of inflected nouns in second language (L2) Finnish by native Russian and Chinese speakers. Case inflection is common in Russian and in Finnish but nonexistent in Chinese. Several models have been posited to describe L2 morphological processing. The unified…
Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel
2011-09-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (cholocate-CHOCOLATE); (v) vowels-transposed condition (chocalote-CHOCOLATE), and (vi) unrelated condition (editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.
Size matters: bigger is faster.
Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E
2009-06-01
A largely unexplored aspect of lexical access in visual word recognition is "semantic size", namely the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.
Mulligan, Neil W
2002-08-01
Extant research presents conflicting results on whether manipulations of attention during encoding affect perceptual priming. Two suggested mediating factors are type of manipulation (selective vs divided) and whether attention is manipulated across multiple objects or within a single object. Words printed in different colors (Experiment 1) or flanked by colored blocks (Experiment 2) were presented at encoding. In the full-attention condition, participants always read the word, in the unattended condition they always identified the color, and in the divided-attention conditions, participants attended to both word identity and color. Perceptual priming was assessed with perceptual identification and explicit memory with recognition. Relative to the full-attention condition, attending to color always reduced priming. Dividing attention between word identity and color, however, only disrupted priming when these attributes were presented as multiple objects (Experiment 2) but not when they were dimensions of a common object (Experiment 1). On the explicit test, manipulations of attention always affected recognition accuracy.
[Analysis of intrusion errors in free recall].
Diesfeldt, H F A
2017-06-01
Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account for intrusion errors in free recall.
Conflict resolved: On the role of spatial attention in reading and color naming tasks.
Robidoux, Serje; Besner, Derek
2015-12-01
The debate about whether or not visual word recognition requires spatial attention has been marked by a conflict: the results from different tasks yield different conclusions. Experiments in which the primary task is reading-based show no evidence that unattended words are processed, whereas when the primary task is color identification, supposedly unattended words do affect processing. However, the color stimuli used to date do not appear to demand as much spatial attention as explicit word reading tasks. We first identify a color stimulus that requires as much spatial attention to identify as does a word. We then demonstrate that when spatial attention is appropriately captured, distractor words in unattended locations do not affect color identification. We conclude that there is no word identification without spatial attention.
Basu, Anamitra; Mandal, Manas K
2004-07-01
The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.
Competition between conceptual relations affects compound recognition: the role of entropy.
Schmidtke, Daniel; Kuperman, Victor; Gagné, Christina L; Spalding, Thomas L
2016-04-01
Previous research has suggested that the conceptual representation of a compound is based on a relational structure linking the compound's constituents. Existing accounts of the visual recognition of modifier-head or noun-noun compounds posit that the process involves the selection of a relational structure out of a set of competing relational structures associated with the same compound. In this article, we employ the information-theoretic metric of entropy to gauge relational competition and investigate its effect on the visual identification of established English compounds. The data from two lexical decision megastudies indicate that greater entropy (i.e., increased competition) in a set of conceptual relations associated with a compound is associated with longer lexical decision latencies. This finding indicates that there exists competition between potential meanings associated with the same complex word form. We provide empirical support for conceptual composition during compound word processing in a model that incorporates the effect of the integration of co-activated and competing relational information.
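The entropy metric the authors describe is standard Shannon entropy computed over a compound's distribution of conceptual relations: the flatter the distribution, the greater the competition. A minimal sketch, with invented relation labels and frequencies (not taken from the megastudy norms):

```python
import math

def relation_entropy(counts):
    """Shannon entropy (in bits) of a compound's conceptual-relation distribution.

    `counts` maps each relational interpretation (e.g., "X MADE OF Y") to how
    often it was chosen in norming; higher entropy means more competition.
    """
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical norming frequencies for two compounds:
one_dominant_relation = {"MADE OF": 18, "FOR": 1, "LOCATED": 1}   # low competition
evenly_split_relations = {"MADE OF": 7, "FOR": 7, "LOCATED": 6}   # high competition

print(relation_entropy(one_dominant_relation))
print(relation_entropy(evenly_split_relations))
```

On the paper's account, the second compound, with its near-uniform relation distribution and hence higher entropy, would be the one predicted to show longer lexical decision latencies.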
An fMRI study of semantic processing in men with schizophrenia
Kubicki, M.; McCarley, R.W.; Nestor, P.G.; Huh, T.; Kikinis, R.; Shenton, M.E.; Wible, C.G.
2009-01-01
As a means toward understanding the neural bases of schizophrenic thought disturbance, we examined brain activation patterns in response to semantically and superficially encoded words in patients with schizophrenia. Nine male schizophrenic and 9 male control subjects were tested in a visual levels of processing (LOP) task first outside the magnet and then during the fMRI scanning procedures (using a different set of words). During the experiments visual words were presented under two conditions. Under the deep, semantic encoding condition, subjects made semantic judgments as to whether the words were abstract or concrete. Under the shallow, nonsemantic encoding condition, subjects made perceptual judgments of the font size (uppercase/lowercase) of the presented words. After performance of the behavioral task, a recognition test was used to assess the depth of processing effect, defined as better performance for semantically encoded words than for perceptually encoded words. For the scanned version only, the words for both conditions were repeated in order to assess repetition-priming effects. Reaction times were assessed in both testing scenarios. Both groups showed the expected depth of processing effect for recognition, and control subjects showed the expected increased activation of the left inferior prefrontal cortex (LIPC) under semantic encoding relative to perceptual encoding conditions as well as repetition priming for semantic conditions only. In contrast, schizophrenics showed similar patterns of fMRI activation regardless of condition. Most striking in relation to controls, patients showed decreased LIPC activation concurrent with increased left superior temporal gyrus activation for semantic encoding versus shallow encoding. Furthermore, schizophrenia subjects did not show the repetition priming effect, either behaviorally or as a decrease in LIPC activity. 
In patients with schizophrenia, LIPC underactivation and left superior temporal gyrus overactivation for semantically encoded words may reflect a disease-related disruption of a distributed frontal temporal network that is engaged in the representation and processing of meaning of words, text, and discourse and which may underlie schizophrenic thought disturbance. PMID:14683698
Farris-Trimble, Ashley; McMurray, Bob
2013-08-01
Researchers have begun to use eye tracking in the visual world paradigm (VWP) to study clinical differences in language processing, but the reliability of such laboratory tests has rarely been assessed. In this article, the authors assess test-retest reliability of the VWP for spoken word recognition. Method: Participants performed an auditory VWP task in repeated sessions and a visual-only VWP task in a third session. The authors performed correlation and regression analyses on several parameters to determine which reflect reliable behavior and which are predictive of behavior in later sessions. Results showed that the fixation parameters most closely related to timing and degree of fixations were moderately to strongly correlated across days, whereas the parameters related to rate of increase or decrease of fixations to particular items were less strongly correlated. Moreover, when including factors derived from the visual-only task, the performance of the regression model was at least moderately correlated with Day 2 performance on all parameters (R > .30). The VWP is stable enough (with some caveats) to serve as an individual measure. These findings suggest guidelines for future use of the paradigm and for areas of improvement in both methodology and analysis.
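The core of a test-retest analysis like this is a correlation of each fixation parameter across sessions. A minimal sketch of that step (Pearson's r computed by hand, on hypothetical per-participant values; this is not the authors' analysis code):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-participant peak target-fixation proportions on two days
day1 = [0.81, 0.74, 0.88, 0.69, 0.77, 0.83]
day2 = [0.79, 0.71, 0.90, 0.66, 0.80, 0.85]
# A high cross-day correlation indicates a stable (reliable) fixation measure
print(round(pearson_r(day1, day2), 2))
```

In practice one would run this per fixation parameter (timing, degree, rate of rise) and compare the resulting reliabilities, which is what distinguishes the stable parameters from the unstable ones in the study.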
Hazardous sign detection for safety applications in traffic monitoring
NASA Astrophysics Data System (ADS)
Benesova, Wanda; Kottman, Michal; Sidla, Oliver
2012-01-01
The transportation of hazardous goods in public street systems can pose severe safety threats in case of accidents. One solution to this problem is the automatic detection and registration of vehicles that are marked with dangerous goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two different methods for the detection: the bag-of-visual-words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided in this paper. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate. Different code-book sizes have been evaluated for this detection task. The best result of the first method, BoW, was 67% successfully recognized hazardous signs, whereas the second method proposed in this paper, pairs of visual words with Hough voting, reached 94% correctly detected signs. The experiments are designed to verify the usability of the two proposed approaches in a real-world scenario.
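The BoW baseline the paper compares against quantizes local image descriptors onto a code-book of visual words and classifies the resulting histogram. A toy sketch of that quantization step (2-D stand-ins replace real SIFT-like descriptors, and the code-book is hand-picked rather than learned by clustering, so none of this reflects the authors' actual pipeline):

```python
def nearest_word(desc, codebook):
    """Index of the closest code-book entry (visual word) to a descriptor."""
    dists = [sum((d - c) ** 2 for d, c in zip(desc, word)) for word in codebook]
    return dists.index(min(dists))

def bow_histogram(descriptors, codebook):
    """Count how many descriptors fall on each visual word."""
    hist = [0] * len(codebook)
    for desc in descriptors:
        hist[nearest_word(desc, codebook)] += 1
    return hist

codebook = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0)]  # toy 3-word vocabulary
descriptors = [(0.1, 0.2), (0.9, 1.1), (0.05, 0.95), (0.8, 0.9)]
print(bow_histogram(descriptors, codebook))  # → [1, 2, 1]
```

The histogram is then fed to a classifier; the paper's finding that code-book size matters corresponds to varying `len(codebook)` here, and its pairs-of-words variant additionally records the spatial arrangement of word pairs for Hough voting.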
Does the advantage of the upper part of words occur at the lexical level?
Perea, Manuel; Comesaña, Montserrat; Soares, Ana P
2012-11-01
Several recent studies have shown that the upper part of words is more important than the lower part in visual word recognition. Here, we examine whether or not this advantage arises at the lexical or at the letter (letter feature) level. To examine this issue, we conducted two lexical decision experiments in which words/pseudowords were preceded by a very brief (50-ms) presentation of their upper or lower parts (e.g., ). If the advantage for the upper part of words arises at the letter (letter feature) level, the effect should occur for both words and pseudowords. Results revealed an advantage for the upper part of words, but not for pseudowords. This suggests that the advantage for the upper part of words occurs at the lexical level, rather than at the letter (or letter feature) level.
Is the masked priming same-different task a pure measure of prelexical processing?
Kelly, Andrew N; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy
2013-01-01
To study prelexical processes involved in visual word recognition, a task is needed that only operates at the level of abstract letter identities. The masked priming same-different task has been purported to do this, as the same pattern of priming is shown for words and nonwords. However, studies using this task have consistently found a processing advantage for words over nonwords, indicating a lexicality effect. We investigated the locus of this word advantage. Experiment 1 used conventional visually presented reference stimuli to test previous accounts of the lexicality effect. Results rule out the use of different strategies, or strength of representations, for words and nonwords. No interaction was shown between prime type and word type, but a consistent word advantage was found. Experiment 2 used novel auditorily presented reference stimuli to restrict nonword matching to the sublexical level. This abolished scrambled priming for nonwords, but not words. Overall this suggests the processing advantage for words over nonwords results from activation of whole-word, lexical representations. Furthermore, the number of shared open-bigrams between primes and targets could account for scrambled priming effects. These results have important implications for models of orthographic processing and studies that have used this task to investigate prelexical processes.
Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F
Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by ) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original ) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. 
However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. First, we successfully replicated the major findings from the ) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
2016-01-01
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. 
However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787
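The noise-vocoding procedure described in this record can be sketched in a few lines: split the speech into spectral channels, extract each channel's amplitude envelope, and use that envelope to modulate noise filtered into the same band. This is a minimal illustration of the general technique, not the study's implementation; the channel edges, filter orders, and envelope cutoff below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=6000.0, env_cut=50.0):
    """Noise-vocode `signal` into `n_channels` spectral channels.

    Channel edges are log-spaced between f_lo and f_hi. All cutoff
    values here are illustrative, not those used in the study.
    """
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        env_lp = butter(4, env_cut, btype="lowpass", fs=fs, output="sos")
        # Envelope of this speech band: rectify, then low-pass filter.
        env = sosfiltfilt(env_lp, np.abs(sosfilt(band, signal)))
        # Excite the same band of noise with that envelope and sum.
        out += sosfilt(band, noise) * env
    return out
```

With four channels, the output preserves the temporal envelope of the speech in each band while discarding fine spectral detail, which is what makes the stimuli spectrally degraded yet still intelligible.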
Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J
1999-05-01
The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). 
The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but it was broader, including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal words but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their time course, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.
Hyperopia and emergent literacy of young children: pilot study.
Shankar, Sunita; Evans, Mary Ann; Bobier, William R
2007-11-01
To compare emergent literacy skills in uncorrected hyperopic and emmetropic children. "Hyperopes" (≥2.00 D sphere along the most hyperopic meridian; n=13; aged 67±13 mo) and "emmetropes" (
Visual processing of words in a patient with visual form agnosia: a behavioural and fMRI study.
Cavina-Pratesi, Cristiana; Large, Mary-Ellen; Milner, A David
2015-03-01
Patient D.F. has a profound and enduring visual form agnosia due to a carbon monoxide poisoning episode suffered in 1988. Her inability to distinguish simple geometric shapes or single alphanumeric characters can be attributed to a bilateral loss of cortical area LO, a loss that has been well established through structural and functional fMRI. Yet despite this severe perceptual deficit, D.F. is able to "guess" remarkably well the identity of whole words. This paradoxical finding, which we were able to replicate more than 20 years following her initial testing, raises the question as to whether D.F. has retained specialized brain circuitry for word recognition that is able to function to some degree without the benefit of inputs from area LO. We used fMRI to investigate this, and found regions in the left fusiform gyrus, left inferior frontal gyrus, and left middle temporal cortex that responded selectively to words. A group of healthy control subjects showed similar activations. The left fusiform activations appear to coincide with the area commonly named the visual word form area (VWFA) in studies of healthy individuals, and appear to be quite separate from the fusiform face area (FFA). We hypothesize that there is a route to this area that lies outside area LO, and which remains relatively unscathed in D.F. Copyright © 2014 Elsevier Ltd. All rights reserved.
Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements
Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.
2016-01-01
In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424
Xue, Gui; Jiang, Ting; Chen, Chuansheng; Dong, Qi
2008-02-15
How language experience affects visual word recognition has been a topic of intense interest. Using event-related potentials (ERPs), the present study compared the early electrophysiological responses (i.e., N1) to familiar and unfamiliar writings under different conditions. Thirteen native Chinese speakers (with English as their second language) were recruited to passively view four types of scripts: Chinese (familiar logographic writings), English (familiar alphabetic writings), Korean Hangul (unfamiliar logographic writings), and Tibetan (unfamiliar alphabetic writings). Stimuli also differed in lexicality (words vs. non-words, for familiar writings only), length (characters/letters vs. words), and presentation duration (100 ms vs. 750 ms). We found no significant differences between words and non-words, and the effect of language experience (familiar vs. unfamiliar) was significantly modulated by stimulus length and writing system, and to a lesser degree, by presentation duration. That is, the language experience effect (i.e., a stronger N1 response to familiar writings than to unfamiliar writings) was significant only for alphabetic letters, but not for alphabetic and logographic words. The difference between Chinese characters and unfamiliar logographic characters was significant under the condition of short presentation duration, but not under the condition of long presentation duration. Long stimuli elicited a stronger N1 response than did short stimuli, but this effect was significantly attenuated for familiar writings. These results suggest that N1 response might not reliably differentiate familiar and unfamiliar writings. More importantly, our results suggest that N1 is modulated by visual, linguistic, and task factors, which has important implications for the visual expertise hypothesis.
Smith, Mary Lou; Bigel, Marla; Miller, Laurie A
2011-02-01
The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.
Wilson, Richard H
2015-04-01
In 1940, a cooperative effort by the radio networks and Bell Telephone produced the volume unit (vu) meter that has been the mainstay instrument for monitoring the level of speech signals in commercial broadcasting and research laboratories. With the use of computers, today the amplitude of signals can be quantified easily using the root mean square (rms) algorithm. Researchers had previously reported that amplitude estimates of sentences and running speech were 4.8 dB higher when measured with a vu meter than when calculated with rms. This study addresses the vu-rms relation as applied to the carrier phrase and target word paradigm used to assess word-recognition abilities, the premise being that by definition the word-recognition paradigm is a special and different case from that described previously. The purpose was to evaluate the vu and rms amplitude relations for the carrier phrases and target words commonly used to assess word-recognition abilities. In addition, the relations with the target words between rms level and recognition performance were examined. Descriptive and correlational. Two recorded versions of the Northwestern University Auditory Test No. 6 were evaluated: the Auditec of St. Louis (Auditec) male speaker and the Department of Veterans Affairs (VA) female speaker. Using both visual and auditory cues from a waveform editor, the temporal onsets and offsets were defined for each carrier phrase and each target word. The rms amplitudes for those segments then were computed and expressed in decibels with reference to the maximum digitization range. The data were maintained for each of the four Northwestern University Auditory Test No. 6 word lists. Descriptive analyses were used with linear regressions used to evaluate the reliability of the measurement technique and the relation between the rms levels of the target words and recognition performances.
Although there was a 1.3 dB difference between the calibration tones, the mean levels of the carrier phrases for the two recordings were -14.8 dB (Auditec) and -14.1 dB (VA) with standard deviations <1 dB. For the target words, the mean amplitudes were -19.9 dB (Auditec) and -18.3 dB (VA) with standard deviations ranging from 1.3 to 2.4 dB. The mean durations for the carrier phrases of both recordings were 593-594 msec, with the mean durations of the target words a little different, 509 msec (Auditec) and 528 msec (VA). Random relations were observed between the recognition performances and rms levels of the target words. Amplitude and temporal data for the individual words are provided. The rms levels of the carrier phrases closely approximated (±1 dB) the rms levels of the calibration tones, both of which were set to 0 vu (dB). The rms levels of the target words were 5-6 dB below the levels of the carrier phrases and were substantially more variable than the levels of the carrier phrases. The relation between the rms levels of the target words and recognition performances on the words was random. American Academy of Audiology.
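The measurement underlying the levels reported above, rms amplitude expressed in decibels re the maximum digitization range (dB FS), can be sketched directly. The 16-bit full-scale value below is an assumption for illustration; the source does not state the digitization depth.

```python
import numpy as np

def rms_dbfs(samples, full_scale=32768.0):
    """RMS level of a waveform segment, in dB re the maximum
    digitization range (full scale assumed 16-bit here)."""
    x = np.asarray(samples, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms / full_scale)
```

For example, a full-scale sine tone measures about -3.0 dB (its rms is the peak divided by the square root of 2), and halving the amplitude lowers the reading by 6 dB, so segment levels such as -14.8 dB for carrier phrases follow directly from this definition.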
Newly learned word forms are abstract and integrated immediately after acquisition
Kapnoula, Efthymia C.; McMurray, Bob
2015-01-01
A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702
Verbal overshadowing of visual memories: some things are better left unsaid.
Schooler, J W; Engstler-Schooler, T Y
1990-01-01
It is widely believed that verbal processing generally improves memory performance. However, in a series of six experiments, verbalizing the appearance of previously seen visual stimuli impaired subsequent recognition performance. In Experiment 1, subjects viewed a videotape including a salient individual. Later, some subjects described the individual's face. Subjects who verbalized the face performed less well on a subsequent recognition test than control subjects who did not engage in memory verbalization. The results of Experiment 2 replicated those of Experiment 1 and further clarified the effect of memory verbalization by demonstrating that visualization does not impair face recognition. In Experiments 3 and 4 we explored the hypothesis that memory verbalization impairs memory for stimuli that are difficult to put into words. In Experiment 3 memory impairment followed the verbalization of a different visual stimulus: color. In Experiment 4 marginal memory improvement followed the verbalization of a verbal stimulus: a brief spoken statement. In Experiments 5 and 6 the source of verbally induced memory impairment was explored. The results of Experiment 5 suggested that the impairment does not reflect a temporary verbal set, but rather indicates relatively long-lasting memory interference. Finally, Experiment 6 demonstrated that limiting subjects' time to make recognition decisions alleviates the impairment, suggesting that memory verbalization overshadows but does not eradicate the original visual memory. This collection of results is consistent with a recoding interference hypothesis: verbalizing a visual memory may produce a verbally biased memory representation that can interfere with the application of the original visual memory.
Tracking Hierarchical Processing in Morphological Decomposition with Brain Potentials
Lavric, Aureliu; Elchlepp, Heike; Rastle, Kathleen
2012-01-01
One important debate in psycholinguistics concerns the nature of morphological decomposition processes in visual word recognition (e.g., darkness = {dark} + {-ness}). One theory claims that these processes arise during orthographic analysis and prior to accessing meaning (Rastle & Davis, 2008), and another argues that these processes arise through…
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta; Gomez, Pablo
2016-01-01
A number of models of visual-word recognition assume that the repetition of an item in a lexical decision experiment increases that item's familiarity/wordness. This would produce not only a facilitative repetition effect for words, but also an inhibitory effect for nonwords (i.e., more familiarity/wordness makes the negative decision slower). We conducted a two-block lexical decision experiment to examine word/nonword repetition effects in the framework of a leading "familiarity/wordness" model of the lexical decision task, namely, the diffusion model (Ratcliff et al., 2004). Results showed that while repeated words were responded to faster than the unrepeated words, repeated nonwords were responded to more slowly than the nonrepeated nonwords. Fits from the diffusion model revealed that the repetition effect for words/nonwords was mainly due to differences in the familiarity/wordness (drift rate) parameter. This word/nonword dissociation favors those accounts that posit that the previous presentation of an item increases its degree of familiarity/wordness.
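The diffusion-model account summarized above can be illustrated with a small simulation: evidence accumulates noisily from a starting point toward a "word" or "nonword" boundary, and repetition shifts the drift rate toward "word". All parameter values below are illustrative assumptions, not fits from the study.

```python
import numpy as np

def ddm_trial(drift, a=0.1, z=0.05, s=0.1, dt=0.001, rng=None):
    """One diffusion-model trial: accumulate evidence from starting
    point z until it crosses 0 ('nonword') or a ('word').
    Returns (choice, decision_time). Parameters are illustrative."""
    rng = rng if rng is not None else np.random.default_rng()
    x, t = z, 0.0
    while 0.0 < x < a:
        x += drift * dt + s * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("word" if x >= a else "nonword"), t

def mean_rt(drift, n=300, seed=0):
    """Mean decision time over n simulated trials at a given drift rate."""
    rng = np.random.default_rng(seed)
    return np.mean([ddm_trial(drift, rng=rng)[1] for _ in range(n)])
```

If repetition raises familiarity/wordness, a repeated word's drift moves further toward the "word" boundary (e.g., 0.25 to 0.35), speeding responses, while a repeated nonword's drift moves from, say, -0.25 toward -0.15, weakening the evidence for "nonword" and slowing responses, which is exactly the dissociation the abstract reports.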
Methods study for the relocation of visual information in central scotoma cases
Scherlen, Anne-Catherine; Gautier, Vincent
2005-03-01
In this study we tested the benefit, for reading performance, of different ways of relocating the visual information hidden under a scotoma. Relocation (or unmasking) compensates for the loss of information and keeps the patient from developing viewing strategies ill-suited to reading. Eight healthy subjects performed a reading task while central scotomas of various sizes were simulated. We then evaluated reading speed (words/min) under three relocation methods: all masked information relocated on both sides of the scotoma, all masked information relocated to the right of the scotoma, and only the letters essential for word recognition relocated to the right of the scotoma. We compared these reading speeds with the pathological condition, i.e., without relocating visual information. Our results show that unmasking improves reading speed when all the visual information is unmasked to the right of the scotoma, but only for large scotomas. Taking word morphology into account, perceiving only certain letters outside the scotoma can be sufficient to improve reading speed. A deeper understanding of reading processes in the presence of a scotoma will open new perspectives for unmasking visual information. The multidisciplinary competences of engineers, ophthalmologists, linguists, and clinicians should make it possible to optimize the reading benefit of unmasking.
Walla, P; Püregger, E; Lehrner, J; Mayer, D; Deecke, L; Dal Bianco, P
2005-05-01
Effects related to depth of verbal information processing were investigated in probable Alzheimer's disease (AD) patients and age-matched controls. During word-encoding sessions, 10 patients and 10 controls had either to decide whether the letter "s" appeared in visually presented words (alphabetical decision, shallow encoding), or whether the meaning of each presented word was animate or inanimate (lexical decision, deep encoding). These encoding sessions were followed by test sessions during which all previously encoded words were presented again together with the same number of new words. The task was then to discriminate between repeated and new words. Magnetic field changes related to brain activity were recorded with a whole-cortex MEG. Five probable AD patients showed recognition performances above chance level related to both depths of information processing. Those patients and 5 age-matched controls were then further analysed. Recognition performance was poorer in probable AD patients compared to controls for both levels of processing. However, in both groups deep encoding led to a higher recognition performance than shallow encoding. We therefore conclude that the performance reduction in the patient group was independent of depth of processing. Reaction times related to false alarms differed between patients and controls after deep encoding, which could perhaps already be used to support an early diagnosis. The analysis of the physiological data revealed significant differences between correctly recognised repetitions and correctly classified new words (old/new effect) in the control group, which were missing in the patient group after deep encoding. The lack of such an effect in the patient group is interpreted as being due to the respective neuropathology related to probable AD. The present results demonstrate that magnetic field recordings represent a useful tool to physiologically distinguish between probable AD patients and age-matched controls.
Immediate lexical integration of novel word forms.
Kapnoula, Efthymia C; Packard, Stephanie; Gupta, Prahlad; McMurray, Bob
2015-01-01
It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Experiment 1) or passive (Experiment 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants' fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. Copyright © 2014 Elsevier B.V. All rights reserved.
Alternating-script priming in Japanese: Are Katakana and Hiragana characters interchangeable?
Perea, Manuel; Nakayama, Mariko; Lupker, Stephen J
2017-07-01
Models of written word recognition in languages using the Roman alphabet assume that a word's visual form is quickly mapped onto abstract units. This proposal is consistent with the finding that masked priming effects are of similar magnitude from lowercase, uppercase, and alternating-case primes (e.g., beard-BEARD, BEARD-BEARD, and BeArD-BEARD). We examined whether this claim can be readily generalized to the 2 syllabaries of Japanese Kana (Hiragana and Katakana). The specific rationale was that if the visual form of Kana words is lost early in the lexical access process, alternating-script repetition primes should be as effective as same-script repetition primes at activating a target word. Results showed that alternating-script repetition primes were less effective at activating lexical representations of Katakana words than same-script repetition primes; indeed, they were no more effective than partial primes that contained only the Katakana characters from the alternating-script primes. Thus, the idiosyncrasies of each writing system do appear to shape the pathways to lexical access. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Models of Verbal Working Memory Capacity: What Does It Take to Make Them Work?
Cowan, Nelson; Rouder, Jeffrey N.; Blume, Christopher L.; Saults, J. Scott
2013-01-01
Theories of working memory (WM) capacity limits will be more useful when we know what aspects of performance are governed by the limits and what aspects are governed by other memory mechanisms. Whereas considerable progress has been made on models of WM capacity limits for visual arrays of separate objects, less progress has been made in understanding verbal materials, especially when words are mentally combined to form multi-word units or chunks. Toward a more comprehensive theory of capacity limits, we examine models of forced-choice recognition of words within printed lists, using materials designed to produce multi-word chunks in memory (e.g., leather brief case). Several simple models were tested against data from a variety of list lengths and potential chunk sizes, with test conditions that only imperfectly elicited the inter-word associations. According to the most successful model, participants retained about 3 chunks on average in a capacity-limited region of WM, with some chunks being only subsets of the presented associative information (e.g., leather brief case retained with leather as one chunk and brief case as another). The addition to the model of an activated long-term memory (LTM) component unlimited in capacity was needed. A fixed capacity limit appears critical to account for immediate verbal recognition and other forms of WM. We advance a model-based approach that allows capacity to be assessed despite other important processing contributions. Starting with a psychological-process model of WM capacity developed to understand visual arrays, we arrive at a more unified and complete model. PMID:22486726
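The fixed-capacity logic behind the most successful model can be sketched for a two-alternative forced-choice recognition test: a chunk held in the capacity-limited region of WM is recognized outright, and any unretained chunk is guessed. This sketch captures only the capacity-limited component; the paper's full model also includes an activated-LTM component unlimited in capacity, and the function names here are my own.

```python
def predicted_accuracy(k, n_chunks_in_list, guess=0.5):
    """Two-alternative forced-choice accuracy when k of the list's
    chunks are held in capacity-limited WM and the rest are guessed."""
    p_in_wm = min(k, n_chunks_in_list) / n_chunks_in_list
    return p_in_wm + (1.0 - p_in_wm) * guess

def estimate_k(accuracy, n_chunks_in_list, guess=0.5):
    """Invert the prediction to estimate capacity from observed accuracy."""
    return n_chunks_in_list * (accuracy - guess) / (1.0 - guess)
```

For instance, retaining about 3 chunks from a 6-chunk list predicts 75% forced-choice accuracy, and observed accuracy can be inverted the same way to estimate capacity across list lengths and chunk sizes.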
Intelligibility of emotional speech in younger and older adults.
Dupuis, Kate; Pichora-Fuller, M Kathleen
2014-01-01
Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in a list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility because as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of seven vocal emotions. The speech was presented in multi-talker babble at signal-to-noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (mean age = 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (mean age = 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz.
There were significant main effects of emotion, listener age group, and condition on the accuracy of word recognition in noise. Stimuli spoken in a fearful voice were the most intelligible, while those spoken in a sad voice were the least intelligible. Overall, word recognition accuracy was poorer for older than younger adults, but there was no main effect of talker, and the pattern of the effects of different emotions on intelligibility did not differ significantly across age groups. Acoustical analyses helped elucidate the effect of emotion and some intertalker differences. Finally, all participants performed better when emotions were blocked. For both groups, performance improved over repeated presentations of each emotion in both blocked and mixed conditions. These results are the first to demonstrate a relationship between vocal emotion and word recognition accuracy in noise for younger and older listeners. In particular, the enhancement of intelligibility by emotion is greatest for words spoken to portray fear and presented heterogeneously with other emotions. Fear may have a specialized role in orienting attention to words heard in noise. This finding may be an auditory counterpart to the enhanced detection of threat information in visual displays. The effect of vocal emotion on word recognition accuracy is preserved in older listeners with good audiograms and both age groups benefit from blocking and the repetition of emotions.
The neural basis of visual word form processing: a multivariate investigation.
Nestor, Adrian; Behrmann, Marlene; Plaut, David C
2013-07-01
Current research on the neurobiological bases of reading points to the privileged role of a ventral cortical network in visual word processing. However, the properties of this network and, in particular, its selectivity for orthographic stimuli such as words and pseudowords remain topics of significant debate. Here, we approached this issue from a novel perspective by applying pattern-based analyses to functional magnetic resonance imaging data. Specifically, we examined whether, where, and how orthographic stimuli elicit distinct patterns of activation in the human cortex. First, at the category level, multivariate mapping found extensive sensitivity throughout the ventral cortex for words relative to false-font strings. Second, at the identity level, multi-voxel pattern classification provided direct evidence that different pseudowords are encoded by distinct neural patterns. Third, a comparison of pseudoword and face identification revealed that both stimulus types exploit common neural resources within the ventral cortical network. These results provide novel evidence regarding the involvement of the left ventral cortex in orthographic stimulus processing and shed light on its selectivity and discriminability profile. In particular, our findings support the existence of sublexical orthographic representations within the left ventral cortex while arguing for the continuity of reading with other visual recognition skills.
High-Fidelity Visual Long-Term Memory within an Unattended Blink of an Eye.
Kuhbandner, Christof; Rosas-Corona, Elizabeth A; Spachtholz, Philipp
2017-01-01
What is stored in long-term memory from current sensations is a question that has attracted considerable interest. Over time, several prominent theories have consistently proposed that only attended sensory information leaves a durable memory trace, whereas unattended information is not stored beyond the current moment, an assumption that seems to be supported by abundant empirical evidence. Here we show, by using a more sensitive memory test than in previous studies, that this is actually not true. Observers viewed a rapid stream of real-world object pictures overlapped by words (presentation duration per stimulus: 500 ms, interstimulus interval: 200 ms), with the instruction to attend to the words and detect word repetitions, without knowing that their memory would be tested later. In a surprise two-alternative forced-choice recognition test, memory for the unattended object pictures was tested. Memory performance was substantially above chance, even when detailed feature knowledge was necessary for correct recognition, even when tested 24 h later, and even though participants reported having no memory of the pictures. These findings suggest that humans have the ability to store detailed copies of current visual stimulations in long-term memory at high speed, independently of current intentions and the current attentional focus.
Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina
2017-11-22
Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind ( n = 10, 9 female, 1 male) and sighted control ( n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? 
We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible.
Signed reward prediction errors drive declarative learning.
De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom
2018-01-01
Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning, a quintessentially human form of learning, remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.
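The signed RPE at issue here is the standard delta-rule error term from models of procedural learning. A minimal sketch of that definition (illustrative only; this is the classic value-update rule, not the authors' word-pair paradigm):

```python
def update_value(v, reward, alpha=0.1):
    """One delta-rule step: the signed RPE (reward - v) scales the
    change in the expected value v."""
    rpe = reward - v
    return v + alpha * rpe, rpe

# With repeated rewards of 1.0, the value estimate converges and the
# positive ("better-than-expected") RPE shrinks toward zero.
v = 0.0
for _ in range(50):
    v, rpe = update_value(v, 1.0)
```

A positive RPE means the outcome was better than expected; the declarative-learning finding is that larger positive values of this quantity predicted better later recognition.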
The picture superiority effect in patients with Alzheimer's disease and mild cognitive impairment.
Ally, Brandon A; Gold, Carl A; Budson, Andrew E
2009-01-01
The fact that pictures are better remembered than words has been reported in the literature for over 30 years. While this picture superiority effect has been consistently found in healthy young and older adults, no study has directly evaluated the presence of the effect in patients with Alzheimer's disease (AD) or mild cognitive impairment (MCI). Clinical observations have indicated that pictures enhance memory in these patients, suggesting that the picture superiority effect may be intact. However, several studies have reported visual processing impairments in AD and MCI patients which might diminish the picture superiority effect. Using a recognition memory paradigm, we tested memory for pictures versus words in these patients. The results showed that the picture superiority effect is intact, and that these patients showed a similar benefit to healthy controls from studying pictures compared to words. The findings are discussed in terms of visual processing and possible clinical importance.
Lupker, Stephen J.
2017-01-01
The experiments reported here used “Reversed-Interior” (RI) primes (e.g., cetupmor-COMPUTER) in three different masked priming paradigms in order to test between different models of orthographic coding/visual word recognition. The results of Experiment 1, using a standard masked priming methodology, showed no evidence of priming from RI primes, in contrast to the predictions of the Bayesian Reader and LTRS models. By contrast, Experiment 2, using a sandwich priming methodology, showed significant priming from RI primes, in contrast to the predictions of open bigram models, which predict that there should be no orthographic similarity between these primes and their targets. Similar results were obtained in Experiment 3, using a masked prime same-different task. The results of all three experiments are most consistent with the predictions derived from simulations of the Spatial-coding model.
Information properties of morphologically complex words modulate brain activity during word reading.
Hakala, Tero; Hultén, Annika; Lehtonen, Minna; Lagus, Krista; Salmelin, Riitta
2018-06-01
Neuroimaging studies of the reading process point to functionally distinct stages in word recognition. Yet, current understanding of the operations linked to those various stages is mainly descriptive in nature. Approaches developed in the field of computational linguistics may offer a more quantitative approach for understanding brain dynamics. Our aim was to evaluate whether a statistical model of morphology, with well-defined computational principles, can capture the neural dynamics of reading, using the concept of surprisal from information theory as the common measure. The Morfessor model, created for unsupervised discovery of morphemes, is based on the minimum description length principle and attempts to find optimal units of representation for complex words. In a word recognition task, we correlated brain responses to word surprisal values derived from Morfessor and from other psycholinguistic variables that have been linked with various levels of linguistic abstraction. The magnetoencephalography data analysis focused on spatially, temporally and functionally distinct components of cortical activation observed in reading tasks. The early occipital and occipito-temporal responses were correlated with parameters relating to visual complexity and orthographic properties, whereas the later bilateral superior temporal activation was correlated with whole-word based and morphological models. The results show that the word processing costs estimated by the statistical Morfessor model are relevant for brain dynamics of reading during late processing stages.
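Surprisal, the linking measure used in this work, is simply the negative log-probability of a unit. A toy sketch under a unigram frequency model (the counts are hypothetical, and Morfessor itself estimates probabilities over discovered morphs rather than whole-word unigrams):

```python
import math

def surprisal(word, counts):
    """Surprisal in bits, -log2 p(word), with p estimated from corpus
    counts (add-one smoothing so unseen words get finite surprisal)."""
    total = sum(counts.values())
    p = (counts.get(word, 0) + 1) / (total + len(counts) + 1)
    return -math.log2(p)

# Hypothetical counts: frequent units carry less surprisal (lower
# predicted processing cost) than rare or unseen ones.
counts = {"the": 1000, "reading": 50, "morpheme": 2}
```

The paper's logic is that a word's surprisal under such a probabilistic model should track the processing cost visible in the brain response.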
Magnetoencephalographic features related to mild cognitive impairment.
Püregger, E; Walla, P; Deecke, L; Dal-Bianco, P
2003-12-01
We recorded changes of brain activity from 10 MCI patients and 10 controls related to shallow (nonsemantic) and deep (semantic) word encoding using a whole-head MEG. During the following recognition tasks, all participants had to recognize the previously encoded words, which were presented again together with new words. In both groups recognition performance significantly varied as a function of depth of processing. No significant differences were found between the groups. Reaction times related to correctly classified new words (correct rejections) and incorrectly classified repetitions (misses) of MCI patients showed a strong tendency toward prolongation compared to controls, although no statistically significant differences occurred. Strikingly, in patients the neurophysiological data associated with nonsemantic and semantic word encoding differed significantly between 250 and 450 ms after stimulus onset mainly over left frontal and left temporal sensors. They showed higher electrophysiological activation during shallow encoding as compared to deep encoding. No such significant differences were found in controls. The present results might reflect a dysfunction with respect to shallow encoding of visually presented verbal information. It is interpreted that additional neural activation is needed to compensate for neurodegeneration. This finding is suggested to be an additional tool for MCI diagnosis.
"What" and "where" in word reading: ventral coding of written words revealed by parietal atrophy.
Vinckier, Fabien; Naccache, Lionel; Papeix, Caroline; Forget, Joaquim; Hahn-Barma, Valerie; Dehaene, Stanislas; Cohen, Laurent
2006-12-01
The visual system of literate adults develops a remarkable perceptual expertise for printed words. To delineate the aspects of this competence intrinsic to the occipitotemporal "what" pathway, we studied a patient with bilateral lesions of the occipitoparietal "where" pathway. Depending on critical geometric features of the display (rotation angle, letter spacing, mirror reversal, etc.), she switched from good performance, when her intact ventral pathway was sufficient to encode words, to severely impaired reading, when her parietal lesions prevented the use of alternative reading strategies as a result of spatial and attentional impairments. In particular, reading was disrupted (a) by rotating words by more than 50 degrees, providing an approximation of the invariance range for word encoding in the ventral pathway; (b) by separating letters with double spaces, revealing the limits of letter grouping into perceptual wholes; (c) by mirror-reversing words, showing that words escape the default mirror-invariant representation of visual objects in the ventral pathway. Moreover, because of her parietal lesions, she was unable to discriminate mirror images of common objects, although she was excellent with reversible pseudowords, confirming that the breaking of mirror symmetry was intrinsic to the occipitotemporal cortex. Thus, charting the display conditions associated with preserved or impaired performance allowed us to infer properties of word coding in the normal ventral pathway and to delineate the roles of the parietal lobes in single-word recognition.
The effect of orthographic neighborhood in the reading span task.
Robert, Christelle; Postal, Virginie; Mathey, Stéphanie
2015-04-01
This study aimed at examining whether and to what extent orthographic neighborhood of words influences performance in a working memory span task. Twenty-five participants performed a reading span task in which final words to be memorized had either no higher frequency orthographic neighbor or at least one. In both neighborhood conditions, each participant completed three series of two, three, four, or five sentences. Results indicated an interaction between orthographic neighborhood and list length. In particular, an inhibitory effect of orthographic neighborhood on recall appeared in list length 5. A view is presented suggesting that words with higher frequency neighbors require more resources to be memorized than words with no such neighbors. The implications of the results are discussed with regard to memory processes and current models of visual word recognition.
A stimulus-centered reading disorder for words and numbers: Is it neglect dyslexia?
Arduino, Lisa S; Daini, Roberta; Caterina Silveri, Maria
2005-12-01
A single case, RCG, showing a unilateral reading disorder without unilateral spatial neglect was studied. The disorder was characterized by substitutions of the initial (left) letters of words, nonwords, and Arabic numbers, independently of egocentered spatial coordinates. MRI showed a bilateral lesion with involvement of the splenium. Although, within the framework of the visual word recognition model proposed by Caramazza and Hillis (1990), RCG's disorder could be defined as stimulus-centered neglect dyslexia, we discuss the hypothesis of a dissociation in neural correlates and mechanisms between the syndrome of unilateral spatial neglect and such a unilateral reading disorder.
Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language.
Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela
2017-01-01
Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.
Novel grid-based optical Braille conversion: from scanning to wording
NASA Astrophysics Data System (ADS)
Yoosefi Babadi, Majid; Jafari, Shahram
2011-12-01
Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to provide the final output. The algorithms employed in this article can be easily modified to be implemented on other visual pattern recognition systems and text extraction applications. This technique has several advantages, including: simplicity of the algorithm, high speed of execution, ability to help visually impaired persons and blind people to work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille to understand hard-copy Braille manuscripts.
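The decoding step of such a pipeline reduces to a lookup from each recognized cell's 2x3 dot pattern to a character. A hedged sketch (the table covers only three letters and the names are illustrative, not GOBCO's actual implementation):

```python
# Each Braille cell is a 2x3 dot grid; dots are numbered 1-6 in the
# standard order (1-3 down the left column, 4-6 down the right).
BRAILLE = {
    (1, 0, 0, 0, 0, 0): "a",  # dot 1
    (1, 1, 0, 0, 0, 0): "b",  # dots 1, 2
    (1, 0, 0, 1, 0, 0): "c",  # dots 1, 4
}

def decode_cells(cells):
    """Map each 6-tuple dot pattern to a character; '?' for patterns
    missing from the table."""
    return "".join(BRAILLE.get(tuple(cell), "?") for cell in cells)

word = decode_cells([(1, 0, 0, 0, 0, 0), (1, 1, 0, 0, 0, 0)])  # "ab"
```

In the full pipeline, the grid-fitting stage would supply the dot patterns from the scanned image, and the decoded words would then be checked against the dictionary.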
ERIC Educational Resources Information Center
Wong, Simpson W. L.; Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Waye, Mary M. Y.; Bishop, Dorothy V. M.
2014-01-01
This twin study examined the relative contributions of genes and environment on 2nd language reading acquisition of Chinese-speaking children learning English. We examined whether specific skills (visual word recognition, receptive vocabulary, phonological awareness, phonological memory, and speech discrimination) in the 1st and 2nd languages have…
Shared Features Dominate Semantic Richness Effects for Concrete Concepts
ERIC Educational Resources Information Center
Grondin, Ray; Lupker, Stephen J.; McRae, Ken
2009-01-01
When asked to list semantic features for concrete concepts, participants list many features for some concepts and few for others. Concepts with many semantic features are processed faster in lexical and semantic decision tasks [Pexman, P. M., Lupker, S. J., & Hino, Y. (2002). "The impact of feedback semantics in visual word recognition:…
The Resolution of Visual Noise in Word Recognition
ERIC Educational Resources Information Center
Pae, Hye K.; Lee, Yong-Won
2015-01-01
This study examined lexical processing in English by native speakers of Korean and Chinese, compared to that of native speakers of English, using normal, alternated, and inverse fonts. Sixty four adult students participated in a lexical decision task. The findings demonstrated similarities and differences in accuracy and latency among the three L1…
ERIC Educational Resources Information Center
Orwig, Gary W.
1979-01-01
The first experiment determined that verbal interference (shadowing) was detrimental to the subjects' memory of words and high similarity pictures; the second, designed to minimize the possibility that students would sort through the pictures, indicated that verbal interference did not decrease memory of high similarity pictures. (Author/JEG)
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Dimitropoulou, María; Estevez, Adelina; Carreiras, Manuel
2013-01-01
The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to "unlearn" this natural tolerance to mirror reversals in order to efficiently…
Strategy Access Rods: A Hands-On Approach.
ERIC Educational Resources Information Center
Worthing, Bernadette; Laster, Barbara
2002-01-01
Describes Strategy Access Rods (SARs), balsa-wood, prism-like or rectangular rods on which a one-sentence reading strategy phrase in the first person is printed. Notes SARs serve as a visual, auditory, kinesthetic, and tactile reminder of the strategies available to developing readers. Discusses use of SARs for word recognition and comprehension.…
Story Comprehension as a Function of Modality and Reading Ability.
ERIC Educational Resources Information Center
Marlowe, Wendy; And Others
1979-01-01
In a study 12 normal children and 12 reading disabled (word recognition difficulties) children (mean age 9.2 years) were compared for reading and listening comprehension to test whether disabled readers, given an auditory presentation, would show comprehension of material comparable to that of normal readers given visual presentation. (PHR)
Designing a Humane Multimedia Interface for the Visually Impaired.
ERIC Educational Resources Information Center
Ghaoui, Claude; Mann, M.; Ng, Eng Huat
2001-01-01
Promotes the provision of interfaces that allow users to access most of the functionality of existing graphical user interfaces (GUI) using speech. Uses the design of a speech control tool that incorporates speech recognition and synthesis into existing packaged software such as Teletext, the Internet, or a word processor. (Contains 22…
ERIC Educational Resources Information Center
Dunabeitia, Jon Andoni; Perea, Manuel; Carreiras, Manuel
2007-01-01
When does morphological decomposition occur in visual word recognition? An increasing body of evidence suggests the presence of early morphological processing. The present work investigates this issue via an orthographic similarity manipulation. Three masked priming lexical decision experiments were conducted to examine the transposed-letter…
Mechanisms of Masked Priming: Testing the Entry Opening Model
ERIC Educational Resources Information Center
Wu, Hongmei
2012-01-01
Since it was introduced in Forster and Davis (1984), masked priming has been widely adopted in the psycholinguistic research on visual word recognition, but there has been little consensus on its actual mechanisms, i.e. how it occurs and how it should be interpreted. This dissertation addresses two different interpretations of masked priming, one…
Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain
ERIC Educational Resources Information Center
Hannagan, Thomas; Grainger, Jonathan
2012-01-01
It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
Does Kaniso activate CASINO?: input coding schemes and phonology in visual-word recognition.
Acha, Joana; Perea, Manuel
2010-01-01
Most recent input coding schemes in visual-word recognition assume that letter position coding is orthographic rather than phonological in nature (e.g., SOLAR, open-bigram, SERIOL, and overlap). This assumption has been drawn, in part, from the fact that the transposed-letter effect (e.g., caniso activates CASINO) seems to be (mostly) insensitive to phonological manipulations (e.g., Perea & Carreiras, 2006, 2008; Perea & Pérez, 2009). However, one could argue that the lack of a phonological effect in prior research was due to the fact that the manipulation always occurred in internal letter positions; note that phonological effects tend to be stronger for the initial syllable (Carreiras, Ferrand, Grainger, & Perea, 2005). To reexamine this issue, we conducted a masked priming lexical decision experiment in which we compared the priming effect for transposed-letter pairs (e.g., caniso-CASINO vs. caviro-CASINO) and for pseudohomophone transposed-letter pairs (kaniso-CASINO vs. kaviro-CASINO). Results showed a transposed-letter priming effect for the correctly spelled pairs, but not for the pseudohomophone pairs. This is consistent with the view that letter position coding is (primarily) orthographic in nature.
Elhassan, Zena; Crewther, Sheila G.; Bavin, Edith L.
2017-01-01
Research examining phonological awareness (PA) contributions to reading in established readers of different skill levels is limited. The current study examined the contribution of PA to phonological decoding, visual word recognition, reading rate, and reading comprehension in 124 fourth to sixth grade children (aged 9–12 years). On the basis of scores on the FastaReada measure of reading fluency participants were allocated to one of three reading ability categories: dysfluent (n = 47), moderate (n = 38) and fluent (n = 39). For the dysfluent group, PA contributed significantly to all reading measures except rate, but in the moderate group only to phonological decoding. PA did not influence performances on any of the reading measures examined for the fluent reader group. The results support the notion that fluency is characterized by a shift from conscious decoding to rapid and accurate visual recognition of words. Although PA may be influential in reading development, the results of the current study show that it is not sufficient for fluent reading. PMID:28443048
Diesfeldt, H F A
2011-06-01
A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could distinguish real letters from made-up ones or from their mirror images. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.
Perceptual and academic patterns of learning-disabled/gifted students.
Waldron, K A; Saphire, D G
1992-04-01
This research explored ways gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. 24 learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. Conclusions are that these underlying perceptual and memory deficits may be related to students' academic problems.
L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition
ERIC Educational Resources Information Center
Hamada, Megumi
2017-01-01
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…
Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language
Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela
2017-01-01
Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training. PMID:29326617
Sommers, Mitchell S.; Phelps, Damian
2016-01-01
One goal of the present study was to establish whether providing younger and older adults with visual speech information (both seeing and hearing a talker compared with listening alone) would reduce listening effort for understanding speech in noise. In addition, we used an individual differences approach to assess whether changes in listening effort were related to changes in visual enhancement – the improvement in speech understanding in going from an auditory-only (A-only) to an auditory-visual (AV) condition. To compare word recognition in A-only and AV modalities, younger and older adults identified words in both A-only and AV conditions in the presence of six-talker babble. Listening effort was assessed using a modified version of a serial recall task. Participants heard (A-only) or saw and heard (AV) a talker producing individual words without background noise. List presentation was stopped randomly and participants were then asked to repeat the last 3 words that were presented. Listening effort was assessed using recall performance in the 2-back and 3-back positions. Younger, but not older, adults exhibited reduced listening effort as indexed by greater recall in the 2- and 3-back positions for the AV compared with the A-only presentations. For younger, but not older adults, changes in performance from the A-only to the AV condition were moderately correlated with visual enhancement. Results are discussed within a limited-resource model of both A-only and AV speech perception. PMID:27355772
Discrete Emotion Effects on Lexical Decision Response Times
Briesemeister, Benny B.; Kuchinke, Lars; Jacobs, Arthur M.
2011-01-01
Our knowledge about affective processes, especially concerning effects on cognitive demands like word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and different notions of valence, broad agreement on a two dimensional model of affective space has been achieved. Alternative models like the discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust and fear significantly contributing to this account. Moreover, these effects even persist in an experiment with discrete emotion conditions when the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two dimensional affective space account. PMID:21887307
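The abstract's analysis (backward elimination over multiple regression predictors) can be sketched in miniature. The following is a minimal illustration assuming ordinary least squares and an R²-loss stopping rule; the predictor names and data are invented for illustration and do not reproduce the authors' actual models or variables:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def r_squared(X, y):
    """R^2 of an OLS fit with intercept; X is a list of predictor columns."""
    rows = [[1.0] + [col[i] for col in X] for i in range(len(y))]
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    yhat = [sum(b * v for b, v in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def backward_eliminate(named_cols, y, min_r2_loss=0.01):
    """Repeatedly drop the predictor whose removal costs the least R^2,
    as long as that cost stays below min_r2_loss."""
    cols = dict(named_cols)
    while len(cols) > 1:
        full = r_squared(list(cols.values()), y)
        name, reduced = min(
            ((n, r_squared([v for m, v in cols.items() if m != n], y)) for n in cols),
            key=lambda t: full - t[1])
        if full - reduced >= min_r2_loss:
            break
        del cols[name]
    return list(cols)

# Toy data: response times driven entirely by one rating, plus an irrelevant predictor.
happiness = [1, 2, 3, 4, 5]
noise = [2, 1, 4, 3, 5]
rt = [2, 4, 6, 8, 10]
print(backward_eliminate([("happiness", happiness), ("noise", noise)], rt))  # ['happiness']
```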
Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals
ERIC Educational Resources Information Center
Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.
2017-01-01
Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…
NASA Astrophysics Data System (ADS)
Madokoro, H.; Tsukada, M.; Sato, K.
2013-07-01
This paper presents an unsupervised learning-based object category formation and recognition method for mobile robot vision. Our method has the following features: detection of feature points and description of features using a scale-invariant feature transform (SIFT), selection of target feature points using one-class support vector machines (OC-SVMs), generation of visual words using self-organizing maps (SOMs), formation of labels using adaptive resonance theory 2 (ART-2), and creation and classification of categories on a category map of counter-propagation networks (CPNs) for visualizing spatial relations between categories. Classification results for dynamic images, using time-series images obtained from two different-sized robots and their respective movements, demonstrate that our method can visualize spatial relations between categories while maintaining time-series characteristics. Moreover, we emphasize the effectiveness of our method for category formation under appearance changes of objects.
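The core of the pipeline above, quantizing local descriptors into "visual words", can be illustrated with a minimal bag-of-visual-words sketch. The 2-D descriptors and two-word codebook below are invented toy stand-ins for 128-D SIFT descriptors and a SOM-trained codebook:

```python
def nearest(codebook, vec):
    """Index of the closest codebook vector (visual word) by squared Euclidean distance."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], vec)))

def bag_of_visual_words(codebook, descriptors):
    """Histogram of visual-word occurrences for one image's local descriptors."""
    hist = [0] * len(codebook)
    for d in descriptors:
        hist[nearest(codebook, d)] += 1
    return hist

# Toy 2-D "descriptors"; a real system would use 128-D SIFT descriptors.
codebook = [(0.0, 0.0), (1.0, 1.0)]
descs = [(0.1, -0.1), (0.9, 1.2), (1.1, 0.8)]
print(bag_of_visual_words(codebook, descs))  # [1, 2]
```

The resulting histograms are what a downstream clustering or labeling stage (SOM, ART-2, CPN in the paper's pipeline) would consume.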
Multimodal Alexia: Neuropsychological Mechanisms and Implications for Treatment
Kim, Esther S.; Rapcsak, Steven Z.; Andersen, Sarah; Beeson, Pélagie M.
2011-01-01
Letter-by-letter (LBL) reading is the phenomenon whereby individuals with acquired alexia decode words by sequential identification of component letters. In cases where letter recognition or letter naming is impaired, however, a LBL reading approach is obviated, resulting in a nearly complete inability to read, or global alexia. In some such cases, a treatment strategy wherein letter tracing is used to provide tactile and/or kinesthetic input has resulted in improved letter identification. In this study, a kinesthetic treatment approach was implemented with an individual who presented with severe alexia in the context of relatively preserved recognition of orally spelled words, and mildly impaired oral/written spelling. Eight weeks of kinesthetic treatment resulted in improved letter identification accuracy and oral reading of trained words; however, the participant remained unable to successfully decode untrained words. Further testing revealed that, in addition to the visual-verbal disconnection that resulted in impaired word reading and letter naming, her limited ability to derive benefit from the kinesthetic strategy was attributable to a disconnection that prevented access to letter names from kinesthetic input. We propose that this kinesthetic-verbal disconnection resulted from damage to the left parietal lobe and underlying white matter, a neuroanatomical feature that is not typically observed in patients with global alexia or classic LBL reading. This unfortunate combination of visual-verbal and kinesthetic-verbal disconnections demonstrated in this individual resulted in a persistent multimodal alexia syndrome that was resistant to behavioral treatment. 
To our knowledge, this is the first case in which the nature of this form of multimodal alexia has been fully characterized, and our findings provide guidance regarding the requisite cognitive skills and lesion profiles that are likely to be associated with a positive response to tactile/kinesthetic treatment. PMID:21952194
Automated smartphone audiometry: Validation of a word recognition test app.
Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J
2018-03-01
To develop and validate an automated smartphone word recognition test. Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold-standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
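The two agreement statistics reported in this record (percentage of data points within an acceptable margin, and the linear correlation between test scores) can be computed in outline with two small helpers. The score vectors below are hypothetical, not the study's data:

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def pct_within_margin(app, gold, margin):
    """Share of ears whose app score falls within +/- margin points of the gold standard."""
    hits = sum(abs(a - g) <= margin for a, g in zip(app, gold))
    return hits / len(app)

# Hypothetical scores (%) for three ears; the study analyzed 37 ears.
app = [80, 60, 90]    # app-measured word recognition scores
gold = [84, 50, 92]   # audiologist-measured scores
print(pearson_r(app, gold), pct_within_margin(app, gold, margin=6))
```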
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
Kuperman, Victor; Drieghe, Denis; Keuleers, Emmanuel; Brysbaert, Marc
2013-01-01
We assess the amount of shared variance between three measures of visual word recognition latencies: eye movement latencies, lexical decision times, and naming times. After partialling out the effects of word frequency and word length, two well-documented predictors of word recognition latencies, we see that 7-44% of the variance is uniquely shared between lexical decision times and naming times, depending on the frequency range of the words used. A similar analysis of eye movement latencies shows that the percentage of variance they uniquely share either with lexical decision times or with naming times is much lower. It is 5-17% for gaze durations and lexical decision times in studies with target words presented in neutral sentences, but drops to 0.2% for corpus studies in which eye movements to all words are analysed. Correlations between gaze durations and naming latencies are lower still. These findings suggest that processing times in isolated word processing and continuous text reading are affected by specific task demands and presentation format, and that lexical decision times and naming times are not very informative in predicting eye movement latencies in text reading once the effects of word frequency and word length are taken into account. The difference between controlled experiments and natural reading suggests that reading strategies and stimulus materials may determine the degree to which the immediacy-of-processing assumption and the eye-mind assumption apply. Fixation times are more likely to exclusively reflect the lexical processing of the currently fixated word in controlled studies with unpredictable target words rather than in natural reading of sentences or texts.
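"Partialling out" a covariate before correlating two latency measures amounts to correlating regression residuals. A minimal partial-correlation sketch with a single control variable follows (the study itself controlled for both word frequency and word length; the latency and frequency vectors at the bottom are hypothetical):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def residuals(y, x):
    """Residuals of a simple regression of y on x: the part of y not explained by x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

def partial_r(x, y, control):
    """Correlation between x and y after partialling out a control variable."""
    return pearson_r(residuals(x, control), residuals(y, control))

# Hypothetical latencies (ms) and a log-frequency control predictor:
gaze = [210, 230, 250, 260]
ldt = [520, 560, 590, 640]
freq = [3.2, 2.8, 2.1, 1.5]
print(partial_r(gaze, ldt, freq))
```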
François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni
2017-04-01
Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representation (the word to world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded EEG of 20 adult participants during both an audio alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.
Lexical and age effects on word recognition in noise in normal-hearing children.
Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing
2015-12-01
The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare the word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children (age ranged between 3 and 6 years old). Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., dissyllabic easy (DE), dissyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. The word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with dissyllabic words than with monosyllabic words; "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. The word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and lexical characteristics of words had significant influences on the performance of Mandarin-Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. 
The word-recognition performance in noise increased with age in NH children of 3-6 years old and had not reached a plateau by 6 years of age. Copyright © 2015. Published by Elsevier Ireland Ltd.
ERIC Educational Resources Information Center
Frye, Elizabeth M.; Gosky, Ross
2012-01-01
The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…
Masked Phonological Priming Effects in English: Are They Real? Do They Matter?
ERIC Educational Resources Information Center
Rastle, Kathleen; Brysbaert, Marc
2006-01-01
For over 15 years, masked phonological priming effects have been offered as evidence that phonology plays a leading role in visual word recognition. The existence of these effects--along with their theoretical implications--has, however, been disputed. The authors present three sources of evidence relevant to an assessment of the existence and…
Investigating Occipito-Temporal Contributions to Reading with TMS
ERIC Educational Resources Information Center
Duncan, Keith J.; Pattamadilok, Chotiga; Devlin, Joseph T.
2010-01-01
The debate regarding the role of ventral occipito-temporal cortex (vOTC) in visual word recognition arises, in part, from difficulty delineating the functional contributions of vOTC as separate from other areas of the reading network. Here, we investigated the feasibility of using TMS to interfere with vOTC processing in order to explore its…
ERIC Educational Resources Information Center
McCormick, Samantha F.; Rastle, Kathleen; Davis, Matthew H.
2008-01-01
Recent research using masked priming has suggested that there is a form of morphological decomposition that is based solely on the appearance of morphological complexity and that operates independently of semantic information [Longtin, C.M., Segui, J., & Halle, P. A. (2003). Morphological priming without morphological relationship. "Language and…
SOA Does Not Reveal the Absolute Time Course of Cognitive Processing in Fast Priming Experiments
ERIC Educational Resources Information Center
Tzur, Boaz; Frost, Ram
2007-01-01
According to Bloch's law, as applied to visual word recognition research, both the exposure duration of the prime and its luminance determine the prime's overall energy, and consequently determine the size of the priming effect. Nevertheless, experimenters using fast-priming paradigms traditionally focus only on the SOA between prime and target to reflect the…
An eye movement corpus study of the age-of-acquisition effect.
Dirix, Nicolas; Duyck, Wouter
2017-12-01
In the present study, we investigated the effects of word-level age of acquisition (AoA) on natural reading. Previous studies, using multiple language modalities, showed that earlier-learned words are recognized, read, spoken, and responded to faster than words learned later in life. Until now, in visual word recognition the experimental materials were limited to single-word or sentence studies. We analyzed the data of the Ghent Eye-tracking Corpus (GECO; Cop, Dirix, Drieghe, & Duyck, in press), an eyetracking corpus of participants reading an entire novel, resulting in the first eye movement megastudy of AoA effects in natural reading. We found that the ages at which specific words were learned indeed influenced reading times, above other important (correlated) lexical variables, such as word frequency and length. Shorter fixations for earlier-learned words were consistently found throughout the reading process, in both early (single-fixation durations, first-fixation durations, gaze durations) and late (total reading times) measures. Implications for theoretical accounts of AoA effects and eye movements are discussed.
Enhanced visual awareness for morality and pajamas? Perception vs. memory in 'top-down' effects.
Firestone, Chaz; Scholl, Brian J
2015-03-01
A raft of prominent findings has revived the notion that higher-level cognitive factors such as desire, meaning, and moral relevance can directly affect what we see. For example, under conditions of brief presentation, morally relevant words reportedly "pop out" and are easier to identify than morally irrelevant words. Though such results purport to show that perception itself is sensitive to such factors, much of this research instead demonstrates effects on visual recognition--which necessarily involves not only visual processing per se, but also memory retrieval. Here we report three experiments which suggest that many alleged top-down effects of this sort are actually effects on 'back-end' memory rather than 'front-end' perception. In particular, the same methods used to demonstrate popout effects for supposedly privileged stimuli (such as morality-related words, e.g. "punishment" and "victim") also yield popout effects for unmotivated, superficial categories (such as fashion-related words, e.g. "pajamas" and "stiletto"). We conclude that such effects reduce to well-known memory processes (in this case, semantic priming) that do not involve morality, and have no implications for debates about whether higher-level factors influence perception. These case studies illustrate how it is critical to distinguish perception from memory in alleged 'top-down' effects. Copyright © 2014 Elsevier B.V. All rights reserved.
Audiovisual semantic congruency during encoding enhances memory performance.
Heikkilä, Jenni; Alho, Kimmo; Hyvönen, Heidi; Tiippana, Kaisa
2015-01-01
Studies of memory and learning have usually focused on a single sensory modality, although human perception is multisensory in nature. In the present study, we investigated the effects of audiovisual encoding on later unisensory recognition memory performance. The participants were to memorize auditory or visual stimuli (sounds, pictures, spoken words, or written words), each of which co-occurred with either a semantically congruent stimulus, incongruent stimulus, or a neutral (non-semantic noise) stimulus in the other modality during encoding. Subsequent memory performance was overall better when the stimulus to be memorized was initially accompanied by a semantically congruent stimulus in the other modality than when it was accompanied by a neutral stimulus. These results suggest that semantically congruent multisensory experiences enhance encoding of both nonverbal and verbal materials, resulting in an improvement in their later recognition memory.
Letter-case information and the identification of brand names.
Perea, Manuel; Jiménez, María; Talero, Fernanda; López-Cañada, Soraya
2015-02-01
A central tenet of most current models of visual-word recognition is that lexical units are activated on the basis of case-invariant abstract letter representations. Here, we examined this assumption by using a unique type of words: brand names. The rationale of the experiments is that brand names are archetypically printed either in lowercase (e.g., adidas) or uppercase (e.g., IKEA). This allows us to present the brand names in their standard or non-standard case configuration (e.g., adidas, IKEA vs. ADIDAS, ikea, respectively). We conducted two experiments with a brand-decision task ('is it a brand name?'): a single-presentation experiment and a masked priming experiment. Results in the single-presentation experiment revealed faster identification times of brand names in their standard case configuration than in their non-standard case configuration (i.e., adidas faster than ADIDAS; IKEA faster than ikea). In the masked priming experiment, we found faster identification times of brand names when they were preceded by an identity prime in their standard case configuration than in their non-standard configuration (i.e., faster response times to adidas-adidas than to ADIDAS-adidas). Taken together, the present findings strongly suggest that letter-case information forms part of a brand name's graphemic information, thus posing some limits to current models of visual-word recognition.
The effect of word concreteness on recognition memory.
Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B
2006-09-01
Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.
Roberts, Daniel J; Woollams, Anna M; Kim, Esther; Beeson, Pelagie M; Rapcsak, Steven Z; Lambon Ralph, Matthew A
2013-11-01
Recent visual neuroscience investigations suggest that ventral occipito-temporal cortex is retinotopically organized, with high acuity foveal input projecting primarily to the posterior fusiform gyrus (pFG), making this region crucial for coding high spatial frequency information. Because high spatial frequencies are critical for fine-grained visual discrimination, we hypothesized that damage to the left pFG should have an adverse effect not only on efficient reading, as observed in pure alexia, but also on the processing of complex non-orthographic visual stimuli. Consistent with this hypothesis, we obtained evidence that a large case series (n = 20) of patients with lesions centered on left pFG: 1) Exhibited reduced sensitivity to high spatial frequencies; 2) demonstrated prolonged response latencies both in reading (pure alexia) and object naming; and 3) were especially sensitive to visual complexity and similarity when discriminating between novel visual patterns. These results suggest that the patients' dual reading and non-orthographic recognition impairments have a common underlying mechanism and reflect the loss of high spatial frequency visual information normally coded in the left pFG.
Basirat, Anahita
2017-01-01
Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.
Hemispheric asymmetry of emotion words in a non-native mind: a divided visual field study.
Jończyk, Rafał
2015-05-01
This study investigates hemispheric specialization for emotional words among proficient non-native speakers of English by means of the divided visual field paradigm. The motivation behind the study is to extend the monolingual hemifield research to the non-native context and see how emotion words are processed in a non-native mind. Sixty-eight females participated in the study, all highly proficient in English. The stimuli comprised 12 positive nouns, 12 negative nouns, 12 non-emotional nouns and 36 pseudo-words. To examine the lateralization of emotion, stimuli were presented unilaterally in a random fashion for 180 ms in a go/no-go lexical decision task. The perceptual data showed a right hemispheric advantage for processing speed of negative words and a complementary role of the two hemispheres in the recognition accuracy of experimental stimuli. The data indicate that processing of emotion words in non-native language may require greater interhemispheric communication, but at the same time demonstrate a specific role of the right hemisphere in the processing of negative relative to positive valence. The results of the study are discussed in light of the methodological inconsistencies in the hemifield research as well as the non-native context in which the study was conducted.
One process is not enough! A speed-accuracy tradeoff study of recognition memory.
Boldini, Angela; Russo, Riccardo; Avons, S E
2004-04-01
Speed-accuracy tradeoff (SAT) methods have been used to contrast single- and dual-process accounts of recognition memory. In these procedures, subjects are presented with individual test items and are required to make recognition decisions under various time constraints. In this experiment, we presented word lists under incidental learning conditions, varying the modality of presentation and level of processing. At test, we manipulated the interval between each visually presented test item and a response signal, thus controlling the amount of time available to retrieve target information. Study-test modality match had a beneficial effect on recognition accuracy at short response-signal delays (≤300 msec). Conversely, recognition accuracy benefited more from deep than from shallow processing at study only at relatively long response-signal delays (≥300 msec). The results are congruent with views suggesting that both fast familiarity and slower recollection processes contribute to recognition memory.
Exploring a recognition-induced recognition decrement
Dopkins, Stephen; Ngo, Catherine Trinh; Sargent, Jesse
2007-01-01
Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency. PMID:17063915
The effect of character contextual diversity on eye movements in Chinese sentence reading.
Chen, Qingrong; Zhao, Guoxia; Huang, Xin; Yang, Yiming; Tanenhaus, Michael K
2017-12-01
Chen, Huang, et al. (Psychonomic Bulletin & Review, 2017) found that when reading two-character Chinese words embedded in sentence contexts, contextual diversity (CD), a measure of the proportion of texts in which a word appears, affected fixation times to words. When CD is controlled, however, frequency did not affect reading times. Two experiments used the same experimental designs to examine whether there are frequency effects of the first character of two-character words when CD is controlled. In Experiment 1, yoked triples of characters from a control group, a group matched for character CD that is lower in frequency, and a group matched in frequency with the control group, but higher in character CD, were rotated through the same sentence frame. In Experiment 2 each character from a larger set was embedded in a separate sentence frame, allowing for a larger difference in log frequency compared to Experiment 1 (0.8 and 0.4, respectively). In both experiments, early and later eye movement measures were significantly shorter for characters with higher CD than for characters with lower CD, with no effects of character frequency. These results place constraints on models of visual word recognition and suggest ways in which Chinese can be used to tease apart the nature of context effects in word recognition and language processing in general.
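Contextual diversity (CD), as studied above, is typically operationalized as the number of distinct contexts (e.g., documents or film subtitles) containing a word, log-transformed. A minimal sketch under that assumption; the corpus and function name are illustrative, not from the paper:

```python
from math import log10

def contextual_diversity(word, documents):
    """Log10 count of documents containing the word (+1 smoothing for unseen words)."""
    containing = sum(1 for doc in documents if word in doc.split())
    return log10(containing + 1)

corpus = [
    "the cat sat on the mat",
    "a dog chased the cat",
    "reading involves word recognition",
]
cd_cat = contextual_diversity("cat", corpus)  # "cat" occurs in 2 of the 3 documents
cd_mat = contextual_diversity("mat", corpus)  # "mat" occurs in only 1
```

On this measure, a word repeated many times within a single text gains raw frequency but no contextual diversity, which is what allows the two variables to be deconfounded in designs like those above.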
Improved word recognition for observers with age-related maculopathies using compensation filters
NASA Technical Reports Server (NTRS)
Lawton, Teri B.
1988-01-01
A method for improving word recognition for people with age-related maculopathies, which cause a loss of central vision, is discussed. It is found that the use of individualized compensation filters based on a person's normalized contrast sensitivity function can improve word recognition for people with age-related maculopathies. It is shown that 27-70 percent more magnification is needed for unfiltered words than for filtered words. The improvement in word recognition is positively correlated with the severity of vision loss.
The ERP signature of the contextual diversity effect in visual word recognition.
Vergara-Martínez, Marta; Comesaña, Montserrat; Perea, Manuel
2017-06-01
Behavioral experiments have revealed that words appearing in many different contexts are responded to faster than words that appear in few contexts. Although this contextual diversity (CD) effect has been found to be stronger than the word-frequency (WF) effect, it is a matter of debate whether the facilitative effects of CD and WF reflect the same underlying mechanisms. The analysis of the electrophysiological correlates of CD may shed some light on this issue. This experiment is the first to examine the ERPs to high- and low-CD words when WF is controlled for. Results revealed that while high-CD words produced faster responses than low-CD words, their ERPs showed larger negativities (225-325 ms) than low-CD words. This result goes in the opposite direction of the ERP WF effect (high-frequency words elicit smaller N400 amplitudes than low-frequency words). The direction and scalp distribution of the CD effect resembled the ERP effects associated with "semantic richness." Thus, while apparently related, CD and WF originate from different sources during the access of lexical-semantic representations.
Zhou, Wei; Mo, Fei; Zhang, Yunhong; Ding, Jinhong
2017-01-01
Two experiments were conducted to investigate how linguistic information influences attention allocation in visual search and memory for words. In Experiment 1, participants searched for the synonym of a cue word among five words. The distractors included one antonym and three unrelated words. In Experiment 2, participants were asked to judge whether the five words presented on the screen comprise a valid sentence. The relationships among words were sentential, semantically related or unrelated. A memory recognition task followed. Results in both experiments showed that linguistically related words produced better memory performance. We also found that there were significant interactions between linguistic relation conditions and memorization on eye-movement measures, indicating that good memory for words relied on frequent and long fixations during search in the unrelated condition but to a much lesser extent in linguistically related conditions. We conclude that semantic and syntactic associations attenuate the link between overt attention allocation and subsequent memory performance, suggesting that linguistic relatedness can somewhat compensate for a relative lack of attention during word search.
[Explicit memory for type font of words in source monitoring and recognition tasks].
Hatanaka, Yoshiko; Fujita, Tetsuya
2004-02-01
We investigated whether people can consciously remember the type fonts of words, using explicit memory measures: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and test words using two type fonts: Gothic and MARU. After studying words under one encoding condition, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had been previously presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. These data showed conscious recollection of the type font of words in the source-monitoring task, and a dissociation between source-monitoring and old/new-recognition performance.
Fracture Mechanics Method for Word Embedding Generation of Neural Probabilistic Linguistic Model.
Bi, Size; Liang, Xiao; Huang, Ting-Lei
2016-01-01
Word embedding, a lexical vector representation generated via the neural linguistic model (NLM), has been empirically demonstrated to improve the performance of traditional language models. However, the high dimensionality inherent in NLMs leads to problems with hyperparameter tuning and long training times. Here, we propose a force-directed method to alleviate these problems and simplify the generation of word embeddings. In this framework, each word is treated as a point in physical space, so that its movement can approximately simulate physical motion governed by simple mechanics. To simulate the variation of meaning in phrases, we use fracture mechanics to model the formation and breakdown of the meaning of 2-gram word groups. In experiments on the natural language tasks of part-of-speech tagging, named entity recognition, and semantic role labeling, the results demonstrated that the 2-dimensional word embeddings rival those generated by classic NLMs in terms of accuracy, recall, and text visualization.
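The force-directed idea can be illustrated with a toy 2-D simulation in which observed 2-gram pairs attract and all other word pairs weakly repel. Everything below (constants, update rule, vocabulary) is an illustrative sketch of the general technique, not the authors' implementation:

```python
import random

def force_directed_embed(pairs, vocab, steps=200, attract=0.05, repel=0.01):
    """Toy 2-D word embedding: co-occurring 2-gram pairs pull together,
    all other pairs push apart with a weak inverse-distance force."""
    random.seed(0)  # reproducible random starting positions
    pos = {w: [random.uniform(-1, 1), random.uniform(-1, 1)] for w in vocab}
    co = set(pairs) | {(b, a) for a, b in pairs}  # symmetric co-occurrence
    for _ in range(steps):
        for a in vocab:
            for b in vocab:
                if a == b:
                    continue
                dx = pos[b][0] - pos[a][0]
                dy = pos[b][1] - pos[a][1]
                dist = (dx * dx + dy * dy) ** 0.5 or 1e-6
                # constant-rate attraction for co-occurring pairs,
                # weak inverse-distance repulsion otherwise
                k = attract if (a, b) in co else -repel / dist
                pos[a][0] += k * dx
                pos[a][1] += k * dy
    return pos

emb = force_directed_embed([("hot", "dog"), ("hot", "tea")],
                           ["hot", "dog", "tea", "car"])
```

After a few hundred steps, words linked by 2-grams settle into a nearby cluster while unrelated words drift away, which is the geometric behavior the abstract's physical analogy relies on.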
Body-object interaction ratings for 1,618 monosyllabic nouns.
Tillotson, Sherri M; Siakaluk, Paul D; Pexman, Penny M
2008-11-01
Body-object interaction (BOI) assesses the ease with which a human body can physically interact with a word's referent. Recent research has shown that BOI influences visual word recognition processes in such a way that responses to high-BOI words (e.g., couch) are faster and less error prone than responses to low-BOI words (e.g., cliff). Importantly, the high-BOI words and the low-BOI words that were used in those studies were matched on imageability. In the present study, we collected BOI ratings for a large set of words. BOI ratings, on a 1-7 scale, were obtained for 1,618 monosyllabic nouns. These ratings allowed us to test the generalizability of BOI effects to a large set of items, and they should be useful to researchers who are interested in manipulating or controlling for the effects of BOI. The body-object interaction ratings for this study may be downloaded from the Psychonomic Society's Archive of Norms, Stimuli, and Data, www.psychonomic.org/archive.
Webb, Tessa M; Beech, John R; Mayall, Kate M; Andrews, Antony S
2006-06-01
The relative importance of internal and external letter features of words in children's developing reading was investigated to clarify further the nature of early featural analysis. In Experiment 1, 72 6-, 8-, and 10-year-olds read aloud words displayed as wholes, external features only (central features missing, thereby preserving word shape information), or internal features only (central features preserved). There was an improvement in the processing of external features compared with internal features as reading experience increased. Experiment 2 examined the processing of the internal and external features of words employing a forward priming paradigm with 60 8-, 10-, and 12-year-olds. Reaction times to internal feature primes were equivalent to a nonprime blank condition, whereas responses to external feature primes were faster than those to the other two prime types. This advantage for the external features of words is discussed in terms of an early and enduring role for processing the external visual features in words during reading development.
Genetic and environmental influences on word recognition and spelling deficits as a function of age.
Friend, Angela; DeFries, John C; Wadsworth, Sally J; Olson, Richard K
2007-05-01
Previous twin studies have suggested a possible developmental dissociation between genetic influences on word recognition and spelling deficits, wherein genetic influence declined across age for word recognition, and increased for spelling recognition. The present study included two measures of word recognition (timed, untimed) and two measures of spelling (recognition, production) in younger and older twins. The heritability estimates for the two word recognition measures were .65 (timed) and .64 (untimed) in the younger group and .65 and .58 respectively in the older group. For spelling, the corresponding estimates were .57 (recognition) and .51 (production) in the younger group and .65 and .67 in the older group. Although these age group differences were not significant, the pattern of decline in heritability across age for reading and increase for spelling conformed to that predicted by the developmental dissociation hypothesis. However, the tests for an interaction between genetic influences on word recognition and spelling deficits as a function of age were not significant.
Recognizing Spoken Words: The Neighborhood Activation Model
Luce, Paul A.; Pisoni, David B.
2012-01-01
Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing-impaired populations of children and adults. PMID:9504270
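The neighborhood probability rule described above is a Luce-style choice ratio: the frequency-weighted support for the stimulus word divided by that support plus the summed frequency-weighted support of its similarity neighbors. A hedged sketch with assumed variable names and made-up numbers:

```python
def neighborhood_probability(stim_prob, stim_freq, neighbors):
    """Luce choice rule: frequency-weighted support for the stimulus word
    divided by that support plus the summed frequency-weighted support of
    its similarity neighbors.  `neighbors` is a list of
    (confusion_probability, frequency) pairs."""
    target = stim_prob * stim_freq
    competition = sum(p * f for p, f in neighbors)
    return target / (target + competition)

# An intelligible word in a dense, high-frequency neighborhood is predicted
# to be identified less reliably than the same word in a sparse neighborhood.
dense = neighborhood_probability(0.9, 50, [(0.6, 200), (0.5, 150)])
sparse = neighborhood_probability(0.9, 50, [(0.1, 10)])
```

The ratio captures the model's central claim: recognition depends not on the target's properties alone but on how much frequency-weighted competition its neighborhood supplies.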
The effect of background noise on the word activation process in nonnative spoken-word recognition.
Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland
2018-02-01
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance.
Perceptual uncertainty is a property of the cognitive system.
Perea, Manuel; Carreiras, Manuel
2012-10-01
We qualify Frost's proposals regarding letter-position coding in visual word recognition and the universal model of reading. First, we show that perceptual uncertainty regarding letter position is not tied to European languages-instead it is a general property of the cognitive system. Second, we argue that a universal model of reading should incorporate a developmental view of the reading process.
Testing the Multiple in the Multiple Read-Out Model of Visual Word Recognition
ERIC Educational Resources Information Center
De Moor, Wendy; Verguts, Tom; Brysbaert, Marc
2005-01-01
This study provided a test of the multiple criteria concept used for lexical decision, as implemented in J. Grainger and A. M. Jacobs's (1996) multiple read-out model. This account predicts more inhibition (or less facilitation) from a masked neighbor when accuracy is stressed more but more facilitation (or less inhibition) when the speed of…
Masked Translation Priming Effects in Visual Word Recognition by Trilinguals
ERIC Educational Resources Information Center
Aparicio, Xavier; Lavaur, Jean-Marc
2016-01-01
The present study aims to investigate how trilinguals process their two non-dominant languages and how those languages influence one another, as well as the relative importance of the dominant language on their processing. With this in mind, 24 French (L1)- English (L2)- and Spanish (L3)-unbalanced trilinguals, deemed equivalent in their L2 and L3…
ERIC Educational Resources Information Center
Welcome, Suzanne E.; Joanisse, Marc F.
2012-01-01
We used fMRI to examine patterns of brain activity associated with component processes of visual word recognition and their relationships to individual differences in reading skill. We manipulated both the judgments adults made on written stimuli and the characteristics of the stimuli. Phonological processing led to activation in left inferior…
ERIC Educational Resources Information Center
Schwartz, Richard G.; Steinman, Susan; Ying, Elizabeth; Mystal, Elana Ying; Houston, Derek M.
2013-01-01
In this plenary paper, we present a review of language research in children with cochlear implants along with an outline of a 5-year project designed to examine the lexical access for production and recognition. The project will use auditory priming, picture naming with auditory or visual interfering stimuli (Picture-Word Interference and…
An Investigation of the Role of Grapheme Units in Word Recognition
ERIC Educational Resources Information Center
Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel
2012-01-01
In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…
Einarsson, Einar-Jón; Petersen, Hannes; Wiebe, Thomas; Fransson, Per-Anders; Magnusson, Måns; Moëll, Christian
2011-10-01
To investigate word recognition in noise in subjects treated in childhood with chemotherapy, to study the benefits of open-fitting hearing aids for word recognition, and to investigate whether self-reported hearing handicap corresponded to subjects' word recognition ability. Subjects diagnosed with cancer and treated with platinum-based chemotherapy in childhood underwent audiometric evaluations. Fifteen subjects (eight females and seven males) fulfilled the criteria set for the study, and four of them received customized open-fitting hearing aids. Subjects with cisplatin-induced ototoxicity had severe difficulties recognizing words in noise, scoring as much as 54% below reference scores standardized for age and degree of hearing loss. Hearing-impaired subjects' self-reported hearing handicap correlated significantly with word recognition in a quiet environment but not in noise. Word recognition in noise improved markedly (by up to 46%) with hearing aids, and the self-reported hearing-handicap and disability scores were reduced by more than 50%. This study demonstrates the importance of testing word recognition in noise in subjects treated with platinum-based chemotherapy in childhood, and of using specific custom-made questionnaires to evaluate the experienced hearing handicap. Open-fitting hearing aids are a good alternative for subjects suffering from poor word recognition in noise.
An event-related potential study of memory for words spoken aloud or heard.
Wilding, E L; Rugg, M D
1997-09-01
Subjects made old/new recognition judgements to visually presented words, half of which had been encountered in a prior study phase. For each word judged old, subjects made a subsequent source judgement, indicating whether they had pronounced the word aloud at study (spoken words), or whether they had heard the word spoken to them (heard words). Event-related potentials (ERPs) were compared for three classes of test item; words correctly judged to be new (correct rejections), and spoken and heard words that were correctly assigned to source (spoken hit/hit and heard hit/hit response categories). Consistent with previous findings (Wilding, E. L. and Rugg, M. D., Brain, 1996, 119, 889-905), two temporally and topographically dissociable components, with parietal and frontal maxima respectively, differentiated the ERPs to the hit/hit and correct rejection response categories. In addition, there was some evidence that the frontally distributed component could be decomposed into two distinct components, only one of which differentiated the two classes of hit/hit ERPs. The findings suggest that at least three functionally and neurologically dissociable processes can contribute to successful recovery of source information.
SOA does not Reveal the Absolute Time Course of Cognitive Processing in Fast Priming Experiments
Tzur, Boaz; Frost, Ram
2007-01-01
According to Bloch's law, as applied to visual word recognition research, both the exposure duration of the prime and its luminance determine the prime's overall energy, and consequently the size of the priming effect. Nevertheless, experimenters using fast-priming paradigms traditionally focus only on the SOA between prime and target to reflect the absolute speed of the cognitive processes under investigation. Some of the discrepancies in results regarding the time course of orthographic and phonological activation in word recognition research may be due to this factor. This hypothesis was examined by parametrically manipulating the luminance of the prime and its exposure duration, measuring their joint impact on masked repetition priming. The results show that small and non-significant priming effects can be more than tripled simply by increasing luminance, with SOA kept constant. Moreover, increased luminance may compensate for briefer exposure duration and vice versa. PMID:18379635
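Bloch's law holds that, below a critical duration, effective stimulus energy is the product of intensity and time, which is why luminance and exposure duration can trade off against each other at a fixed SOA. A minimal illustration; the function name, units, and numbers are arbitrary stand-ins:

```python
def prime_energy(luminance, duration_ms):
    """Bloch's law: within the critical duration, energy = intensity x time."""
    return luminance * duration_ms

# A brief, bright prime can carry the same energy as a longer, dimmer one,
# so on this account the two should yield comparable masked-priming effects
# even though their SOAs with the target may differ.
brief_bright = prime_energy(80, 30)
long_dim = prime_energy(40, 60)
```

This is the arithmetic behind the article's point: two primes with identical SOA can differ several-fold in energy, and two primes with different SOAs can be energy-matched.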
Word-level recognition of multifont Arabic text using a feature vector matching approach
NASA Astrophysics Data System (ADS)
Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III
1996-03-01
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
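The matching stage described above amounts to nearest-neighbor search over precomputed feature vectors, with several vectors per lexicon word to cover fonts and noise models. The words, feature values, and similarity measure below are stand-ins for the paper's image-morphological features:

```python
def match_word(query_vec, database, top_k=3):
    """Rank lexicon entries by similarity to a query word image's feature
    vector.  `database` maps each word to a list of feature vectors
    (e.g., one per font or noise model); a word's score is its best match."""
    def score(vec):
        # cosine similarity as a stand-in match score
        dot = sum(q * v for q, v in zip(query_vec, vec))
        nq = sum(q * q for q in query_vec) ** 0.5
        nv = sum(v * v for v in vec) ** 0.5
        return dot / (nq * nv) if nq and nv else 0.0

    ranked = sorted(
        ((max(score(v) for v in vecs), word) for word, vecs in database.items()),
        reverse=True,
    )
    return [word for _, word in ranked[:top_k]]

lexicon = {
    "kitab": [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]],  # multiple fonts per word
    "qalam": [[0.1, 0.9, 0.3]],
    "bayt":  [[0.4, 0.4, 0.9]],
}
hypotheses = match_word([0.85, 0.15, 0.45], lexicon, top_k=2)
```

Returning a ranked hypothesis list, rather than a single answer, mirrors the system's design: downstream lexicon pruning and language context can recover from a wrong top match.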
Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian
2018-02-01
Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition
ERIC Educational Resources Information Center
Sulpizio, Simone; McQueen, James M.
2012-01-01
In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…
The Slow Developmental Time Course of Real-Time Spoken Word Recognition
ERIC Educational Resources Information Center
Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob
2015-01-01
This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…
ERIC Educational Resources Information Center
Ebert, Ashlee A.
2009-01-01
Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…
The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition
NASA Astrophysics Data System (ADS)
Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher
2012-01-01
This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. The system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A multi-word recognition framework based on weighted finite-state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition, and was submitted to the RIMES-ICDAR2011 competition, where it outperformed all previously proposed systems on these tasks.
Form–meaning links in the development of visual word recognition
Nation, Kate
2009-01-01
Learning to read takes time and it requires explicit instruction. Three decades of research has taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development. PMID:19933139
Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods
NASA Astrophysics Data System (ADS)
Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi
2018-03-01
Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of its key and most difficult sub-problems. Most existing methods for Tibetan abbreviated word recognition are rule-based approaches, which require vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, significantly improving the performance of Tibetan word segmentation.
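As a minimal illustration of the sequence-labeling formulation of segmentation, the decoding step can be sketched: given per-syllable B/I tags (B = begins a word, I = inside a word), the tagged sequence deterministically yields a segmentation. The tag scheme and function below are illustrative assumptions; the paper's actual model and label set are not specified in this summary.

```python
def tags_to_words(syllables, tags):
    """Recover a word segmentation from per-syllable B/I tags.

    B marks the first syllable of a word, I marks a continuation.
    The tagger itself (e.g. a CRF over syllable features) would
    produce `tags`; here we only show how tags map to words.
    """
    words, current = [], []
    for syl, tag in zip(syllables, tags):
        if tag == "B" and current:
            # A new word starts: flush the syllables collected so far.
            words.append("".join(current))
            current = []
        current.append(syl)
    if current:
        words.append("".join(current))
    return words
```

Under this formulation, abbreviated words need no special vocabulary lookup: the model only has to tag their syllables correctly, which is why the approach combines cleanly with the segmenter.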
The low-frequency encoding disadvantage: Word frequency affects processing demands.
Diana, Rachel A; Reder, Lynne M
2006-07-01
Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.
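Recognition performance in studies like this is commonly summarized by a sensitivity measure computed from the hit and false-alarm rates discussed above, e.g. d' = z(hit rate) − z(false-alarm rate). A minimal sketch follows; the log-linear correction for extreme rates is a standard convention, not something this abstract specifies.

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Adds 0.5 to each cell (log-linear correction) so rates of exactly
    0 or 1 stay finite before the inverse-normal transform.
    """
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)
```

On this measure, low-frequency words' higher hit rates and lower false-alarm rates both push d' upward, which is why the retrieval advantage and the encoding disadvantage must be separated experimentally rather than read off a single score.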
L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.
Hamada, Megumi
2017-10-01
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, findings on multi-syllabic words in English are still limited, despite the fact that the vast majority of words are multi-syllabic. This study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. Search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than in the middle position, the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Emotion and language: Valence and arousal affect word recognition
Brysbaert, Marc; Warriner, Amy Beth
2014-01-01
Emotion influences most aspects of cognition and behavior, but emotional factors are conspicuously absent from current models of word recognition. The influence of emotion on word recognition has mostly been reported in prior studies on the automatic vigilance for negative stimuli, but the precise nature of this relationship is unclear. Various models of automatic vigilance have claimed that the effect of valence on response times is categorical, an inverted-U, or interactive with arousal. The present study used a sample of 12,658 words, and included many lexical and semantic control factors, to determine the precise nature of the effects of arousal and valence on word recognition. Converging empirical patterns observed in word-level and trial-level data from lexical decision and naming indicate that valence and arousal exert independent monotonic effects: Negative words are recognized more slowly than positive words, and arousing words are recognized more slowly than calming words. Valence explained about 2% of the variance in word recognition latencies, whereas the effect of arousal was smaller. Valence and arousal do not interact, but both interact with word frequency, such that valence and arousal exert larger effects among low-frequency words than among high-frequency words. These results necessitate a new model of affective word processing whereby the degree of negativity monotonically and independently predicts the speed of responding. This research also demonstrates that incorporating emotional factors, especially valence, improves the performance of models of word recognition. PMID:24490848
The influence of bodily experience on children's language processing.
Wellsby, Michele; Pexman, Penny M
2014-07-01
The Body-Object Interaction (BOI) variable measures how easily a human body can physically interact with a word's referent (Siakaluk, Pexman, Aguilera, Owen, & Sears, ). A facilitory BOI effect has been observed with adults in language tasks, with faster and more accurate responses for high BOI words (e.g., mask) than for low BOI words (e.g., ship; Wellsby, Siakaluk, Owen, & Pexman, ). We examined the development of this effect in children. Fifty children (aged 6-9 years) and a group of 21 adults completed a word naming task with high and low BOI words. Younger children (aged 6-7 years) did not show a BOI effect, but older children (aged 8-9 years) showed a significant facilitory BOI effect, as did adults. Magnitude of children's BOI effect was related to age as well as reading skills. These results suggest that bodily experience (as measured by the BOI variable) begins to influence visual word recognition behavior by about 8 years of age. Copyright © 2014 Cognitive Science Society, Inc.
The sound of enemies and friends in the neighborhood.
Pecher, Diane; Boot, Inge; van Dantzig, Saskia; Madden, Carol J; Huber, David E; Zeelenberg, René
2011-01-01
Previous studies (e.g., Pecher, Zeelenberg, & Wagenmakers, 2005) found that semantic classification performance is better for target words with orthographic neighbors that are mostly from the same semantic class (e.g., living) compared to target words with orthographic neighbors that are mostly from the opposite semantic class (e.g., nonliving). In the present study we investigated the contribution of phonology to orthographic neighborhood effects by comparing effects of phonologically congruent orthographic neighbors (book-hook) to phonologically incongruent orthographic neighbors (sand-wand). The prior presentation of a semantically congruent word produced larger effects on subsequent animacy decisions when the previously presented word was a phonologically congruent neighbor than when it was a phonologically incongruent neighbor. In a second experiment, performance differences between target words with versus without semantically congruent orthographic neighbors were larger if the orthographic neighbors were also phonologically congruent. These results support models of visual word recognition that assume an important role for phonology in cascaded access to meaning.
Lexical precision in skilled readers: Individual differences in masked neighbor priming.
Andrews, Sally; Hersch, Jolyn
2010-05-01
Two experiments investigated the relationship between masked form priming and individual differences in reading and spelling proficiency among university students. Experiment 1 assessed neighbor priming for 4-letter word targets from high- and low-density neighborhoods in 97 university students. The overall results replicated previous evidence of facilitatory neighborhood priming only for low-neighborhood words. However, analyses including measures of reading and spelling proficiency as covariates revealed that better spellers showed inhibitory priming for high-neighborhood words, while poorer spellers showed facilitatory priming. Experiment 2, with 123 participants, replicated the finding of stronger inhibitory neighbor priming in better spellers using 5-letter words and distinguished facilitatory and inhibitory components of priming by comparing neighbor primes with ambiguous and unambiguous partial-word primes (e.g., crow#, cr#wd, crown CROWD). The results indicate that spelling ability is selectively associated with inhibitory effects of lexical competition. The implications for theories of visual word recognition and the lexical quality hypothesis of reading skill are discussed.
Clinical implications of word recognition differences in earphone and aided conditions
McRackan, Theodore R.; Ahlstrom, Jayne B.; Clinkscales, William B.; Meyer, Ted A.; Dubno, Judy R
2017-01-01
Objective: To compare word recognition scores for adults with hearing loss measured using earphones and in the sound field without and with hearing aids (HAs). Study design: Independent review of pre-surgical audiological data from an active middle ear implant (MEI) FDA clinical trial. Setting: Multicenter prospective FDA clinical trial. Patients: Ninety-four adult HA users. Interventions/Main outcome measures: Pre-operative earphone, unaided, and aided pure tone thresholds; word recognition scores; and speech intelligibility index. Results: We performed an independent review of pre-surgical audiological data from a MEI FDA trial and compared unaided and aided word recognition scores with participants' HAs fit according to the NAL-R algorithm. For 52 participants (55.3%), differences in scores between earphone and aided conditions were >10%; for 33 participants (35.1%), earphone scores were higher by 10% or more than aided scores. These participants had significantly higher pure tone thresholds at 250 Hz, 500 Hz, and 1000 Hz, higher pure tone averages, higher speech recognition thresholds, and higher earphone speech levels (p=0.002). No significant correlation was observed between word recognition scores measured with earphones and with hearing aids (r=0.14; p=0.16), whereas a moderately high positive correlation was observed between unaided and aided word recognition (r=0.68; p<0.001). Conclusion: Results of these analyses do not support the common clinical practice of using word recognition scores measured with earphones to predict aided word recognition or hearing aid benefit. Rather, these results provide evidence supporting the measurement of aided word recognition in patients who are considering hearing aids. PMID:27631832
Marelli, Marco; Amenta, Simona; Crepaldi, Davide
2015-01-01
A largely overlooked side effect in most studies of morphological priming is a consistent main effect of semantic transparency across priming conditions. That is, participants are faster at recognizing stems from transparent sets (e.g., farm) in comparison to stems from opaque sets (e.g., fruit), regardless of the preceding primes. This suggests that semantic transparency may also be consistently associated with some property of the stem word. We propose that this property might be traced back to the consistency, throughout the lexicon, between the orthographic form of a word and its meaning, here named Orthography-Semantics Consistency (OSC), and that an imbalance in OSC scores might explain the "stem transparency" effect. We exploited distributional semantic models to quantitatively characterize OSC, and tested its effect on visual word identification relying on large-scale data taken from the British Lexicon Project (BLP). Results indicated that (a) the "stem transparency" effect is solid and reliable, insofar as it holds in BLP lexical decision times (Experiment 1); (b) an imbalance in terms of OSC can account for it (Experiment 2); and (c) more generally, OSC explains variance in a large item sample from the BLP, proving to be an effective predictor in visual word access (Experiment 3).
Asymmetries in Early Word Recognition: The Case of Stops and Fricatives
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; van der Feest, Suzanne V. H.; Fikkert, Paula
2014-01-01
Toddlers' discrimination of native phonemic contrasts is generally unproblematic. Yet using those native contrasts in word learning and word recognition can be more challenging. In this article, we investigate perceptual versus phonological explanations for asymmetrical patterns found in early word recognition. We systematically investigated the…
Saarela, Carina; Joutsa, Juho; Laine, Matti; Parkkola, Riitta; Rinne, Juha O; Karrasch, Mira
2017-01-01
Emotional content is known to enhance memory in a content-dependent manner in healthy populations. In middle-aged and older adults, a reduced preference for negative material, or even an enhanced preference for positive material has been observed. This preference seems to be modulated by the emotional arousal that the material evokes. The neuroanatomical basis for emotional memory processes is, however, not well understood in middle-aged and older healthy people. Previous research on local gray matter correlates of emotional memory in older populations has mainly been conducted with patients suffering from various neurodegenerative diseases. To our knowledge, this is the first study to examine regional gray matter correlates of immediate free recall and recognition memory of intentionally encoded positive, negative, and emotionally neutral words using voxel-based morphometry (VBM) in a sample of 50-to-79-year-old cognitively intact normal adults. The behavioral analyses yielded a positivity bias in recognition memory, but not in immediate free recall. No associations with memory performance emerged from the region-of-interest (ROI) analyses using amygdalar and hippocampal volumes. Controlling for total intracranial volume, age, and gender, the whole-brain VBM analyses showed statistically significant associations between immediate free recall of negative words and volumes in various frontal regions, between immediate free recall of positive words and cerebellar volume, and between recognition memory of positive words and primary visual cortex volume. The findings indicate that the neural areas subserving memory for emotion-laden information encompass posterior brain areas, including the cerebellum, and that memory for emotion-laden information may be driven by cognitive control functions.
Holistic neural coding of Chinese character forms in bilateral ventral visual system.
Mo, Ce; Yu, Mengxia; Seger, Carol; Mo, Lei
2015-02-01
How are Chinese characters recognized and represented in the brains of skilled readers? A functional MRI fast-adaptation technique was used to address this question. We found that neural adaptation effects were limited to identical characters in the bilateral ventral visual system, while no activation reduction was observed for partially overlapping characters, regardless of the spatial location of the shared sub-character components, suggesting highly selective neuronal tuning to whole characters. The consistent neural profile across the entire ventral visual cortex indicates that Chinese characters are represented as mutually distinctive wholes rather than combinations of sub-character components, which presents a salient contrast to the left-lateralized, simple-to-complex neural representations of alphabetic words. Our findings thus reveal a cultural modulation effect on both local neuronal activity patterns and the functional anatomical regions associated with written symbol recognition. Moreover, the cross-language discrepancy in written symbol recognition mechanisms might stem from language-specific early-stage learning experience. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.
Gwilliams, L; Marantz, A
2015-08-01
Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Drakesmith, Mark; El-Deredy, Wael; Welbourne, Stephen
2015-01-01
Reading words for meaning relies on orthographic, phonological and semantic processing. The triangle model implicates a direct orthography-to-semantics pathway and a phonologically mediated orthography-to-semantics pathway, which interact with each other. The temporal evolution of processing in these routes is not well understood, although theoretical evidence predicts early phonological processing followed by interactive phonological and semantic processing. This study used electroencephalography event-related potential (ERP) analysis and magnetoencephalography (MEG) source localisation to identify temporal markers and the corresponding neural generators of these processes in early (~200 ms) and late (~400 ms) neurophysiological responses to visual words, pseudowords and consonant strings. ERP showed an effect of phonology but not semantics in both time windows, although at ~400 ms there was an effect of stimulus familiarity. Phonological processing at ~200 ms was localised to the left occipitotemporal cortex and the inferior frontal gyrus. At ~400 ms, there was continued phonological processing in the inferior frontal gyrus and additional semantic processing in the anterior temporal cortex. There was also an area in the left temporoparietal junction which was implicated in both phonological and semantic processing. In ERP, the semantic response at ~400 ms appeared to be masked by concurrent processes relating to familiarity, while MEG successfully differentiated these processes. The results support the prediction of early phonological processing followed by an interaction of phonological and semantic processing during word recognition. Neuroanatomical loci of these processes are consistent with previous neuropsychological and functional magnetic resonance imaging studies. The results also have implications for the classical interpretation of N400-like responses as markers for semantic processing.
Anticipatory coarticulation facilitates word recognition in toddlers.
Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan
2015-09-01
Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18- to 24-month-olds (n = 29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eyetracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word learners can take advantage of anticipatory sub-phonemic cues during word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Duñabeitia, Jon Andoni; Dimitropoulou, María; Estévez, Adelina; Carreiras, Manuel
2013-01-01
The visual word recognition system recruits neuronal systems originally developed for object perception which are characterized by orientation insensitivity to mirror reversals. It has been proposed that during reading acquisition beginning readers have to “unlearn” this natural tolerance to mirror reversals in order to efficiently discriminate letters and words. Therefore, it is supposed that this unlearning process takes place in a gradual way and that reading expertise modulates mirror-letter discrimination. However, to date no supporting evidence for this has been obtained. We present data from an eye-movement study that investigated the degree of sensitivity to mirror-letters in a group of beginning readers and a group of expert readers. Participants had to decide which of the two strings presented on a screen corresponded to an auditorily presented word. Visual displays always included the correct target word and one distractor word. Results showed that those distractors that were the same as the target word except for the mirror lateralization of two internal letters attracted participants’ attention more than distractors created by replacement of two internal letters. Interestingly, the time course of the effects was found to be different for the two groups, with beginning readers showing a greater tolerance (decreased sensitivity) to mirror-letters than expert readers. Implications of these findings are discussed within the framework of preceding evidence showing how reading expertise modulates letter identification. PMID:24273596
Does N200 reflect semantic processing?--An ERP study on Chinese visual word recognition.
Du, Yingchun; Zhang, Qin; Zhang, John X
2014-01-01
Recent event-related potential research has reported an N200 response, a negative deflection peaking around 200 ms following the visual presentation of two-character Chinese words. This N200 shows amplitude enhancement upon immediate repetition, and there is preliminary evidence that it reflects orthographic processing but not semantic processing. The present study tested whether this N200 is indeed unrelated to semantic processing using more sensitive measures, including two tasks engaging semantic processing either implicitly or explicitly and a within-trial priming paradigm. In Exp. 1, participants viewed repeated, semantically related, and unrelated prime-target word pairs as they performed a lexical decision task, judging whether or not each target was a real word. In Exp. 2, participants viewed high-related, low-related, and unrelated word pairs as they performed a semantic task, judging whether each word pair was related in meaning. In both tasks, semantic priming was found in both the behavioral data and the N400 ERP responses. Critically, while repetition priming elicited a clear and large enhancement of the N200 response, semantic priming did not modulate the same response. The results indicate that the N200 repetition enhancement effect cannot be explained by semantic priming and that this specific N200 response is unlikely to reflect semantic processing.
Influence of color word availability on the Stroop color-naming effect.
Kim, Hyosun; Cho, Yang Seok; Yamaguchi, Motonori; Proctor, Robert W
2008-11-01
Three experiments tested whether the Stroop color-naming effect is a consequence of word recognition's being automatic or of the color word's capturing visual attention. In Experiment 1, a color bar was presented at fixation as the color carrier, with color and neutral words presented in locations above or below the color bar; Experiment 2 was similar, except that the color carrier could occur in one of the peripheral locations and the color word at fixation. The Stroop effect increased as display duration increased, and the Stroop dilution effect (a reduced Stroop effect when a neutral word is also present) was an approximately constant proportion of the Stroop effect at all display durations, regardless of whether the color bar or color word was at fixation. In Experiment 3, the interval between the onsets of the to-be-named color and the color word was manipulated. The Stroop effect decreased with increasing delay of the color word onset, but the absolute amount of Stroop dilution produced by the neutral word increased. This study's results imply that an attention shift from the color carrier to the color word is an important factor modulating the size of the Stroop effect.
ERIC Educational Resources Information Center
Campos, Ana Duarte; Mendes Oliveira, Helena; Soares, Ana Paula
2018-01-01
The role of syllables as a sublexical unit in visual word recognition and reading is well established in deep and shallow syllable-timed languages such as French and Spanish, respectively. However, its role in intermediate stress-timed languages remains unclear. This paper aims to overcome this gap by studying for the first time the role of…
Incorporating Speech Recognition into a Natural User Interface
NASA Technical Reports Server (NTRS)
Chapa, Nicholas
2017-01-01
The Augmented/Virtual Reality (AVR) Lab has been working to study the applicability of recent virtual and augmented reality hardware and software to KSC operations. This includes the Oculus Rift, HTC Vive, Microsoft HoloLens, and the Unity game engine. My project in this lab is to integrate voice recognition and voice commands into an easy-to-modify system that can be added to an existing portion of a Natural User Interface (NUI). A NUI is an intuitive, simple-to-use interface incorporating visual, touch, and speech recognition. The inclusion of speech recognition capability will allow users to perform actions or make inquiries using only their voice. The simplicity of needing only to speak to control an on-screen object or enact some digital action means that any user can quickly become accustomed to using this system. Multiple programs were tested for use in a speech command and recognition system. Sphinx4 translates speech to text using a Hidden Markov Model (HMM) based Language Model, an Acoustic Model, and a word Dictionary, running on Java. PocketSphinx has similar functionality to Sphinx4 but runs on C. However, neither of these programs was ideal, as building a Java or C wrapper slowed performance. The most suitable speech recognition system tested was the Unity Engine Grammar Recognizer. A Context Free Grammar (CFG) structure is written in an XML file to specify the structure of phrases and words that will be recognized by the Unity Grammar Recognizer. Using Speech Recognition Grammar Specification (SRGS) 1.0 makes modifying the recognized combinations of words and phrases very simple and quick. With SRGS 1.0, semantic information can also be added to the XML file, which allows even more control over how spoken words and phrases are interpreted by Unity. Additionally, using a CFG with SRGS 1.0 produces Finite State Machine (FSM) behavior, limiting the potential for incorrectly heard words or phrases.
The purpose of my project was to investigate options for a speech recognition system. To that end, I attempted to integrate Sphinx4 into a user interface. Sphinx4 had great accuracy and is the only free program able to perform offline speech dictation. However, it had a limited dictionary of words that could be recognized, single-syllable words were almost impossible for it to hear, and since it ran on Java it could not be integrated into the Unity-based NUI. PocketSphinx ran much faster than Sphinx4, which would have made it ideal as a plugin to the Unity NUI; unfortunately, creating a C# wrapper for the C code made the program unusable with Unity, because the wrapper slowed code execution and class files became unreachable. The Unity Grammar Recognizer is the ideal speech recognition interface: it is flexible in recognizing multiple variations of the same command, and it is the most accurate program in recognizing speech because it uses an XML grammar to specify speech structure instead of relying solely on a Dictionary and Language Model. The Unity Grammar Recognizer will be used with the NUI for these reasons, as well as being written in C#, which further simplifies its incorporation.
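A CFG of the kind described above might look like the following SRGS 1.0 fragment. This is a hypothetical grammar for illustration only; the rule names and voice commands are invented and are not taken from the KSC project.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical SRGS 1.0 grammar recognizing phrases like "move panel left" -->
<grammar version="1.0" xml:lang="en-US" mode="voice" root="command"
         xmlns="http://www.w3.org/2001/06/grammar"
         tag-format="semantics/1.0">
  <rule id="command" scope="public">
    <item><ruleref uri="#action"/></item>
    <item>panel</item>
    <item><ruleref uri="#direction"/></item>
  </rule>
  <rule id="action">
    <one-of>
      <item>move</item>
      <item>rotate</item>
    </one-of>
  </rule>
  <rule id="direction">
    <one-of>
      <!-- <tag> elements carry the SISR semantic payload mentioned above -->
      <item>left<tag>out = "LEFT";</tag></item>
      <item>right<tag>out = "RIGHT";</tag></item>
    </one-of>
  </rule>
</grammar>
```

Because every phrase must match this finite rule structure, the recognizer can only ever emit one of the enumerated command combinations, which is what limits mishearing.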
Beato, María Soledad; Arndt, Jason
2014-01-01
False memory illusions have been widely studied using the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words semantically related to a single nonpresented critical word. In a memory test, critical words are often falsely recalled and recognized. The present study was conducted to measure the levels of false recognition for seventy-five Spanish DRM word lists that have multiple critical words per list. Lists included three critical words (e.g., HELL, LUCIFER, and SATAN) simultaneously associated with six studied words (e.g., devil, demon, fire, red, bad, and evil). Different levels of forward associative strength (FAS) between the critical words and their studied associates were used in the construction of the lists. Specifically, we selected lists with the highest FAS values possible, and FAS was continuously decreased in order to obtain the 75 lists. Six words per list, simultaneously associated with three critical words, were sufficient to produce false recognition. Furthermore, there was wide variability in rates of false recognition (e.g., 53% for DUNGEON, PRISON, and GRATES; 1% for BRACKETS, GARMENT, and CLOTHING). Finally, there was no correlation between false recognition and associative strength: false recognition variability could not be attributed to differences in forward associative strength.
Dewar, Michaela; Alber, Jessica; Cowan, Nelson; Della Sala, Sergio
2014-01-01
People perform better on tests of delayed free recall if learning is followed immediately by a short wakeful rest than by a short period of sensory stimulation. Animal and human work suggests that wakeful resting provides optimal conditions for the consolidation of recently acquired memories. However, an alternative account cannot be ruled out, namely that wakeful resting provides optimal conditions for intentional rehearsal of recently acquired memories, thus driving superior memory. Here we utilised non-recallable words to examine whether wakeful rest boosts long-term memory, even when new memories could not be rehearsed intentionally during the wakeful rest delay. The probing of non-recallable words requires a recognition paradigm. Therefore, we first established, via Experiment 1, that the rest-induced boost in memory observed via free recall can be replicated in a recognition paradigm, using concrete nouns. In Experiment 2, participants heard 30 non-recallable non-words, presented as ‘foreign names in a bridge club abroad’ and then either rested wakefully or played a visual spot-the-difference game for 10 minutes. Retention was probed via recognition at two time points, 15 minutes and 7 days after presentation. As in Experiment 1, wakeful rest boosted recognition significantly, and this boost was maintained for at least 7 days. Our results indicate that the enhancement of memory via wakeful rest is not dependent upon intentional rehearsal of learned material during the rest period. We thus conclude that consolidation is sufficient for this rest-induced memory boost to emerge. We propose that wakeful resting allows for superior memory consolidation, resulting in stronger and/or more veridical representations of experienced events which can be detected via tests of free recall and recognition. PMID:25333957
Kellenbach, Marion L; Wijers, Albertus A; Hovius, Marjolijn; Mulder, Juul; Mulder, Gijsbertus
2002-05-15
Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions between these. This pattern of effects provides support for lexical-semantic knowledge being organized in a manner that takes account both of category-based (grammatical class) and attribute-based distinctions.
Visual speech influences speech perception immediately but not automatically.
Mitterer, Holger; Reinisch, Eva
2017-02-01
Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.
Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing
2015-01-01
A novel blind recognition algorithm for frame synchronization words is proposed to recover frame-synchronization-word parameters in digital communication systems. In this paper, a blind recognition method based on hard decisions is first derived in detail, and the criteria for parameter recognition are given. Compared with hard-decision blind recognition, soft decisions can improve recognition accuracy; therefore, exploiting the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed. The improved algorithm can also be extended to other modulation formats. The complete recognition steps of both the hard-decision and the soft-decision algorithms are then given in detail. Finally, simulation results show that both algorithms can blindly recognize the parameters of frame synchronization words, and that the improved algorithm clearly enhances recognition accuracy.
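The hard-decision idea can be sketched as follows: stack the demodulated bitstream into frames of an assumed length and flag bit positions whose value is (nearly) constant across frames, since sync-word bits repeat every frame while payload bits do not. This is a minimal illustration under assumed parameters, not the authors' algorithm; the function name and threshold are invented.

```python
def find_sync_word(bits, frame_len, threshold=0.9):
    """Hard-decision blind sync-word recognition sketch.

    bits: flat list of 0/1 hard decisions; frame_len: assumed frame length.
    Returns candidate (position, bit) pairs where the bit value is constant
    across at least `threshold` of the frames.
    """
    n_frames = len(bits) // frame_len
    frames = [bits[i * frame_len:(i + 1) * frame_len] for i in range(n_frames)]
    pattern = []
    for pos in range(frame_len):
        ratio = sum(f[pos] for f in frames) / n_frames  # fraction of ones
        if ratio >= threshold:
            pattern.append((pos, 1))
        elif ratio <= 1 - threshold:
            pattern.append((pos, 0))
    return pattern
```

A soft-decision variant would accumulate the demodulator's confidence values per position instead of counting hard 0/1 votes, which is what improves accuracy at low SNR.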
Help me if I can't: Social interaction effects in adult contextual word learning.
Verga, Laura; Kotz, Sonja A
2017-11-01
A major challenge in second language acquisition is to build up new vocabulary. How is it possible to identify the meaning of a new word among several possible referents? Adult learners typically use contextual information, which reduces the number of possible referents a new word can have. Alternatively, a social partner may facilitate word learning by directing the learner's attention toward the correct new word meaning. While much is known about the role of this form of 'joint attention' in first language acquisition, little is known about its efficacy in second language acquisition. Consequently, we introduce and validate a novel visual word learning game to evaluate how joint attention affects the contextual learning of new words in a second language. Adult learners acquired new words in either a constant or a variable sentence context, by playing the game with a knowledgeable partner or by playing the game alone on a computer. Results clearly show that participants who learned new words in social interaction (i) are faster in identifying a correct new word referent in variable sentence contexts, and (ii) temporally coordinate their behavior with a social partner. Testing the learned words in a post-learning recall or recognition task showed that participants who learned interactively better recognized words originally learned in a variable context. While this result may suggest that interactive learning facilitates the allocation of attention to a target referent, the differences in performance during recognition and recall call for further studies investigating the effect of social interaction on learning performance. In summary, we provide first evidence of the role of joint attention in second language learning.
Furthermore, the new interactive learning game offers itself to further testing in complex neuroimaging research, where the lack of appropriate experimental set-ups has so far limited the investigation of the neural basis of adult word learning in social interaction. Copyright © 2017 Elsevier B.V. All rights reserved.
MPEG-7 audio-visual indexing test-bed for video retrieval
NASA Astrophysics Data System (ADS)
Gagnon, Langis; Foucher, Samuel; Gouaillier, Valerie; Brun, Christelle; Brousseau, Julie; Boulianne, Gilles; Osterrath, Frederic; Chapdelaine, Claude; Dutrisac, Julie; St-Onge, Francis; Champagne, Benoit; Lu, Xiaojian
2003-12-01
This paper reports on the development status of a Multimedia Asset Management (MAM) test-bed for content-based indexing and retrieval of audio-visual documents within the MPEG-7 standard. The project, called "MPEG-7 Audio-Visual Document Indexing System" (MADIS), specifically targets the indexing and retrieval of video shots and key frames from documentary film archives, based on audio-visual content like face recognition, motion activity, speech recognition and semantic clustering. The MPEG-7/XML encoding of the film database is done off-line. The description decomposition is based on a temporal decomposition into visual segments (shots), key frames and audio/speech sub-segments. The visible outcome will be a web site that allows video retrieval using a proprietary XQuery-based search engine, accessible to members at the Canadian National Film Board (NFB) Cineroute site. For example, end-users will be able to retrieve movie shots in the database that were produced in a specific year, that contain the face of a specific actor speaking a specific word, and in which there is no motion activity. Video streaming is performed over the high-bandwidth CA*net network deployed by CANARIE, a public Canadian Internet development organization.
The Impact of Left and Right Intracranial Tumors on Picture and Word Recognition Memory
ERIC Educational Resources Information Center
Goldstein, Bram; Armstrong, Carol L.; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V.
2004-01-01
This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH…
Heim, Stefan; Weidner, Ralph; von Overheidt, Ann-Christin; Tholen, Nicole; Grande, Marion; Amunts, Katrin
2014-03-01
Phonological and visual dysfunctions may result in reading deficits like those encountered in developmental dyslexia. Here, we use a novel approach to induce similar reading difficulties in normal readers in an event-related fMRI study, thus systematically investigating which brain regions support the orthographic-phonological (e.g., grapheme-to-phoneme conversion, GPC) vs. visual processing pathways. Based upon a previous behavioural study (Tholen et al. 2011), the retrieval of phonemes from graphemes was manipulated by lowering the identifiability of letters in familiar vs. unfamiliar shapes. Visual word and letter processing was impeded by presenting the letters of a word in a moving, non-stationary manner. FMRI revealed that the visual condition activated cytoarchitectonically defined area hOC5 in the magnocellular pathway and area 7A in the right mesial parietal cortex. In contrast, the grapheme manipulation revealed different effects localised predominantly in the bilateral inferior frontal gyrus (left cytoarchitectonic area 44; right area 45) and inferior parietal lobule (including areas PF/PFm), regions that have been demonstrated to show abnormal activation in dyslexic as compared with normal readers. This pattern of activation bears close resemblance to recent findings in dyslexic samples, both behaviourally and with respect to neurofunctional activation patterns. The novel paradigm may thus prove useful in future studies to understand reading problems related to distinct pathways, potentially providing a link to the understanding of real reading impairments in dyslexia.
Lobier, Muriel; Peyrin, Carole; Le Bas, Jean-François; Valdois, Sylviane
2012-07-01
The visual front-end of reading is most often associated with orthographic processing. The left ventral occipito-temporal cortex seems to be preferentially tuned for letter string and word processing. In contrast, little is known of the mechanisms responsible for pre-orthographic processing: the processing of character strings regardless of character type. While the superior parietal lobule has been shown to be involved in multiple letter processing, further data are necessary to extend these results to non-letter characters. The purpose of this study is to identify the neural correlates of pre-orthographic character string processing independently of character type. Fourteen skilled adult readers carried out multiple and single element visual categorization tasks with alphanumeric (AN) and non-alphanumeric (nAN) characters under fMRI. The role of parietal cortex in multiple element processing was further probed with a priori defined anatomical regions of interest (ROIs). Participants activated posterior parietal cortex more strongly for multiple than single element processing. ROI analyses showed that bilateral SPL/BA7 was more strongly activated for multiple than single element processing, regardless of character type. In contrast, no multiple element specific activity was found in the inferior parietal lobules. These results suggest that parietal mechanisms are involved in pre-orthographic character string processing. We argue that, in general, attentional mechanisms are involved in visual word recognition as an early step of word visual analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These simulations also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In the first, early phoneme decisions are used in a lexical search to find the best matching candidate; in the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted by the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition.
Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
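The two lexical-access schemes compared above can be caricatured in a few lines: an "early decision" model commits to the most likely phoneme at each position and then searches the lexicon for the closest match, while a "delayed decision" model keeps the full phoneme probability distributions and scores each lexicon entry jointly. This is a toy sketch with invented function names and made-up phoneme scores, not the authors' simulation software.

```python
import math

def early_decision(phoneme_probs, lexicon):
    """Commit to the argmax phoneme per position, then pick the lexicon
    word with the smallest Hamming distance to the decided string."""
    decided = [max(p, key=p.get) for p in phoneme_probs]
    return min(lexicon,
               key=lambda word: sum(a != b for a, b in zip(decided, word)))

def delayed_decision(phoneme_probs, lexicon):
    """Delay phoneme decisions until lexical access: score each lexicon
    word by the joint probability of its phonemes under the scores."""
    return max(lexicon,
               key=lambda word: math.prod(p.get(ph, 0.0)
                                          for p, ph in zip(phoneme_probs, word)))
```

When per-position evidence is weak but consistent, the delayed model can recover a word that the early model's greedy commitments have already ruled out, which mirrors the paper's finding that the delayed model fit the data best.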
Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur
The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. 
This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
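One plausible reading of the response-entropy measure described above is Shannon entropy over the distribution of completions that norming participants produced for a sentence frame; this is a hedged interpretation for illustration, not a claim about the authors' exact formula.

```python
import math

def response_entropy(counts):
    """Shannon entropy (in bits) of the completion distribution from
    sentence-norming data, e.g. {'dog': 5, 'cat': 3, 'fox': 2}.
    Higher values mean more, and more evenly weighted, competitors."""
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

A frame that everyone completes the same way has zero entropy (no competition), while a frame whose completions are spread over many candidates has high entropy, which is the condition that hampered the older listeners.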
Yoncheva, Yuliya N; Wise, Jessica; McCandliss, Bruce
2015-01-01
Selective attention to grapheme-phoneme mappings during learning can impact the circuitry subsequently recruited during reading. Here we trained literate adults to read two novel scripts of glyph words containing embedded letters under different instructions. For one script, learners linked each embedded letter to its corresponding sound within the word (grapheme-phoneme focus); for the other, decoding was prevented, so entire words had to be memorized. Post-training, ERPs were recorded during a reading task on the trained words within each condition and on untrained but decodable (transfer) words. In the grapheme-phoneme focus condition, reaction-time patterns suggested both trained and transfer words were accessed via sublexical units, yet a left-lateralized, late ERP response showed an enhanced left lateralization for transfer words relative to trained words, potentially reflecting effortful decoding. Collectively, these findings show that selective attention to grapheme-phoneme mappings during learning drives the lateralization of circuitry that supports later word recognition. This study thus provides a model example of how different instructional approaches to the same material may impact changes in brain circuitry. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Word Recognition and Critical Reading.
ERIC Educational Resources Information Center
Groff, Patrick
1991-01-01
This article discusses the distinctions between literal and critical reading and explains the role that word recognition ability plays in critical reading behavior. It concludes that correct word recognition provides the raw material on which higher order critical reading is based. (DB)
Modeling Geometric-Temporal Context With Directional Pyramid Co-Occurrence for Action Recognition.
Yuan, Chunfeng; Li, Xi; Hu, Weiming; Ling, Haibin; Maybank, Stephen J
2014-02-01
In this paper, we present a new geometric-temporal representation for visual action recognition based on local spatio-temporal features. First, we propose a modified covariance descriptor under the log-Euclidean Riemannian metric to represent the spatio-temporal cuboids detected in the video sequences. Compared with previously proposed covariance descriptors, our descriptor can be measured and clustered in Euclidean space. Second, to capture the geometric-temporal contextual information, we construct a directional pyramid co-occurrence matrix (DPCM) to describe the spatio-temporal distribution of the vector-quantized local feature descriptors extracted from a video. DPCM characterizes the co-occurrence statistics of local features as well as the spatio-temporal positional relationships among the concurrent features. These statistics provide strong descriptive power for action recognition. To use DPCM for action recognition, we propose a directional pyramid co-occurrence matching kernel to measure the similarity of videos. The proposed method achieves state-of-the-art performance and improves on the recognition performance of bag-of-visual-words (BOVW) models by a large margin on six public data sets. For example, on the KTH data set, it achieves 98.78% accuracy while the BOVW approach achieves only 88.06%. On both the Weizmann and UCF CIL data sets, the highest possible accuracy of 100% is achieved.
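The log-Euclidean trick referred to above maps each symmetric positive-definite covariance descriptor through the matrix logarithm, after which descriptors can be averaged, compared, and clustered with ordinary Euclidean operations. The following sketch works the 2x2 case by hand via eigendecomposition; real descriptors are larger and would use a linear-algebra library such as NumPy/SciPy.

```python
import math

def logm_spd_2x2(m):
    """Matrix logarithm of a 2x2 symmetric positive-definite matrix
    m = [[a, b], [b, c]]: take logs of the eigenvalues and rebuild
    log(M) = log(l1) u1 u1^T + log(l2) u2 u2^T."""
    a, b, c = m[0][0], m[0][1], m[1][1]
    tr, det = a + c, a * c - b * b
    gap = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + gap, tr / 2.0 - gap   # eigenvalues, l1 >= l2 > 0
    if abs(b) < 1e-12:                        # already diagonal
        return [[math.log(a), 0.0], [0.0, math.log(c)]]
    n = math.hypot(b, l1 - a)
    u1 = (b / n, (l1 - a) / n)                # unit eigenvector for l1
    u2 = (-u1[1], u1[0])                      # orthogonal, for l2
    g1, g2 = math.log(l1), math.log(l2)
    off = g1 * u1[0] * u1[1] + g2 * u2[0] * u2[1]
    return [[g1 * u1[0] ** 2 + g2 * u2[0] ** 2, off],
            [off, g1 * u1[1] ** 2 + g2 * u2[1] ** 2]]
```

Once mapped, two descriptors can be compared by the plain Frobenius distance between their logarithms, which is what lets the paper's descriptors be clustered in Euclidean space.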
Beato, María S; Arndt, Jason
2017-08-01
Memory is a reconstruction of the past and is prone to errors. One of the most widely used paradigms for examining false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words associatively related to a non-presented critical word. In a subsequent memory test, critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to DRM lists associated with multiple critical words.
Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.
2009-01-01
Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two Universities, additive effects of the two variables were observed in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to Borowsky and Besner’s (1993) multistage account and Plaut and Booth’s (2000) single-mechanism model. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands. PMID:20161653
Subtitle-Based Word Frequencies as the Best Estimate of Reading Behavior: The Case of Greek
Dimitropoulou, Maria; Duñabeitia, Jon Andoni; Avilés, Alberto; Corral, José; Carreiras, Manuel
2010-01-01
Previous evidence has shown that word frequencies calculated from corpora based on film and television subtitles can readily account for reading performance, since the language used in subtitles greatly approximates everyday language. The present study examines this issue in a society with increased exposure to subtitle reading. We compiled SUBTLEX-GR, a subtitle-based corpus consisting of more than 27 million Modern Greek words, and tested to what extent subtitle-based frequency estimates and those taken from a written corpus of Modern Greek account for the lexical decision performance of young Greek adults who are exposed to subtitle reading on a daily basis. Results showed that SUBTLEX-GR frequency estimates effectively accounted for participants’ reading performance in two different visual word recognition experiments. More importantly, different analyses showed that frequencies estimated from a subtitle corpus explained the obtained results significantly better than traditional frequencies derived from written corpora. PMID:21833273
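A subtitle-based frequency estimate of the kind described above reduces, at its core, to an occurrences-per-million-words count over the pooled subtitle text. This is a simplified sketch; building a real corpus like SUBTLEX-GR also involves stripping timestamps and markup and making tokenization and normalization choices.

```python
import re
from collections import Counter

def per_million_frequencies(subtitle_text):
    """Frequency-per-million-words table from raw subtitle text.
    Lowercases and keeps alphabetic tokens only (Unicode-aware, so
    Greek text tokenizes too)."""
    tokens = re.findall(r"[^\W\d_]+", subtitle_text.lower())
    counts = Counter(tokens)
    total = sum(counts.values())
    return {word: count * 1_000_000 / total for word, count in counts.items()}
```

The per-million normalization is what makes estimates comparable across corpora of different sizes, e.g. a 27-million-word subtitle corpus versus a written corpus.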
Phonological-Lexical Feedback during Early Abstract Encoding: The Case of Deaf Readers
Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta
2016-01-01
In the masked priming technique, physical identity between prime and target enjoys an advantage over nominal identity in nonwords (GEDA-GEDA faster than geda-GEDA). However, nominal identity overrides physical identity in words (e.g., REAL-REAL similar to real-REAL). Here we tested whether the lack of an advantage of the physical identity condition for words was due to top-down feedback from phonological-lexical information. We examined this issue with deaf readers, as their phonological representations are not as fully developed as in hearing readers. Results revealed that physical identity enjoyed a processing advantage over nominal identity not only in nonwords but also in words (GEDA-GEDA faster than geda-GEDA; REAL-REAL faster than real-REAL). This suggests the existence of fundamental differences in the early stages of visual word recognition of hearing and deaf readers, possibly related to the amount of feedback from higher levels of information. PMID:26731110
Word recognition using a lexicon constrained by first/last character decisions
NASA Astrophysics Data System (ADS)
Zhao, Sheila X.; Srihari, Sargur N.
1995-03-01
In lexicon-based recognition of machine-printed word images, the size of the lexicon can be quite extensive. Recognition performance is closely related to lexicon size and drops quickly as the lexicon grows. Here, we present an algorithm to improve word recognition performance by reducing the size of the given lexicon. The algorithm uses the information provided by the first and last characters of a word to reduce the size of the given lexicon. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The candidates consistent with the classifier's results are selected, yielding a sub-lexicon. A word shape analysis algorithm is then applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images covering a wide range of print types and qualities.
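The lexicon-reduction step described above can be pictured as a simple filter: keep only lexicon entries whose first and last characters fall within the character classifier's candidate sets. This is an illustrative reconstruction, not the authors' implementation; `reduce_lexicon` and the toy data are hypothetical:

```python
def reduce_lexicon(lexicon, first_candidates, last_candidates):
    """Keep only words whose first and last characters appear among
    the classifier's top candidates for those positions."""
    return [w for w in lexicon
            if w and w[0] in first_candidates and w[-1] in last_candidates]

lexicon = ["apple", "ample", "angle", "maple", "title"]
# Suppose the classifier returns {"a", "o"} for the first character
# and {"e"} for the last:
sub_lexicon = reduce_lexicon(lexicon, {"a", "o"}, {"e"})
print(sub_lexicon)  # ['apple', 'ample', 'angle']
```

The subsequent word-shape ranking then only needs to score the (much smaller) sub-lexicon, which is where the reported performance gain comes from.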
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
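The retrieval step of this approach can be pictured as nearest-neighbor search in the shared subspace: embed the query word image, then rank all text-string embeddings by similarity. A minimal sketch, assuming cosine similarity and toy fixed-length vectors (the actual attribute-based embeddings are learned from data, so all values below are invented for illustration):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nearest_word(image_embedding, string_embeddings):
    """Recognition cast as nearest-neighbor search: return the lexicon
    string whose embedding is closest to the image embedding."""
    return max(string_embeddings,
               key=lambda w: cosine(image_embedding, string_embeddings[w]))

# Toy embeddings of lexicon strings in the common subspace:
strings = {"cat": [1.0, 0.0, 0.2], "dog": [0.0, 1.0, 0.1]}
img = [0.9, 0.1, 0.25]  # embedding of a query word image
print(nearest_word(img, strings))  # cat
```

Because both modalities live in one low-dimensional space, spotting (image query against images) and recognition (image query against strings) reduce to the same cheap comparison.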
ERIC Educational Resources Information Center
Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.
2014-01-01
Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
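The j-factor cited above (Boothroyd & Nittrouer, 1988) models whole-word recognition probability as the part probability raised to the power j, i.e. p_whole = p_part^j, so j = log(p_whole) / log(p_part) estimates the number of effectively independent perceptual units. A minimal sketch with illustrative probabilities (the numbers are invented, not from the study):

```python
import math

def j_factor(p_whole, p_part):
    """Effective number of independent perceptual units,
    derived from p_whole = p_part ** j."""
    return math.log(p_whole) / math.log(p_part)

# If words are recognized 51.2% of the time and phonemes 80% of the time:
print(round(j_factor(0.512, 0.8), 2))  # 3.0
```

A j near the number of phonemes implies the phonemes are perceived independently; a smaller j implies lexical context lets listeners recover the whole from fewer independent parts.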
Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study
Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng
2013-01-01
Background In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese, the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether phonological information is rapidly and automatically extracted from Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophones of Chinese characters: The phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance We showed that the violation of lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activations of the visual cortex, suggesting that visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235
Scaltritti, Michele; Balota, David A; Peressotti, Francesca
2013-01-01
Stimulus quality and word frequency produce additive effects in lexical decision performance, whereas the semantic priming effect interacts with both stimulus quality and word frequency effects. This pattern places important constraints on models of visual word recognition. In Experiment 1, all three variables were investigated within a single speeded pronunciation study. The results indicated that the joint effects of stimulus quality and word frequency were dependent upon prime relatedness. In particular, an additive effect of stimulus quality and word frequency was found after related primes, and an interactive effect was found after unrelated primes. It was hypothesized that this pattern reflects an adaptive reliance on related prime information within the experimental context. In Experiment 2, related primes were eliminated from the list, and the interactive effects of stimulus quality and word frequency found following unrelated primes in Experiment 1 reverted to additive effects for the same unrelated prime conditions. The results are supportive of a flexible lexical processor that adapts to both local prime information and global list-wide context.
Inhoff, Albrecht W; Radach, Ralph; Eiter, Brianna M; Juhasz, Barbara
2003-07-01
Two experiments examined readers' use of parafoveally obtained word length information for word recognition. Both experiments manipulated the length (number of constituent characters) of a parafoveally previewed target word so that it was either accurately or inaccurately specified. In Experiment 1, previews also either revealed or denied useful orthographic information. In Experiment 2, parafoveal targets were either high- or low-frequency words. Eye movement contingent display changes were used to show the intact target upon its fixation. Examination of target viewing duration showed completely additive effects of word length previews and of orthographic previews in Experiment 1, viewing duration being shorter in the accurate-length and the orthographic preview conditions. Experiment 2 showed completely additive effects of word length and word frequency, target viewing being shorter in the accurate-length and the high-frequency conditions. Together these results indicate that functionally distinct subsystems control the use of parafoveally visible spatial and linguistic information in reading. Parafoveally visible spatial information appears to be used for two distinct extralinguistic computations: visual object selection and saccade specification.
Incidental orthographic learning during a color detection task.
Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R
2017-09-01
Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations.
Orthographic recognition in late adolescents: an assessment through event-related brain potentials.
González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth
2014-04-01
Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.
Lexical orthography acquisition: Is handwriting better than spelling aloud?
Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane
2014-01-01
Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by writing them down by hand. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect that remained after controlling for the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task. PMID:24575058