The ties that bind what is known to the recognition of what is new.
Nelson, D L; Zhang, N; McKinney, V M
2001-09-01
Recognition success varies with how information is encoded (e.g., level of processing) and with what is already known as a result of past learning (e.g., word frequency). This article presents the results of experiments showing that preexisting connections involving the associates of studied words facilitate their recognition regardless of whether the words are intentionally encoded or are incidentally encoded under semantic or nonsemantic conditions. Words are more likely to be recognized when they have either more resonant connections coming back to them from their associates or more connections among their associates. Such results occur even though attention is never drawn to these associates. Regression analyses showed that these connections affect recognition independently of frequency, so the present results add to the literature showing that prior lexical knowledge contributes to episodic recognition. In addition, equations that use free-association data to derive composite strength indices of resonance and connectivity were evaluated. Implications for theories of recognition are discussed.
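The composite indices mentioned above can be illustrated with a small sketch over a free-association matrix: resonance as the summed strength of links from a word's associates back to the word, and connectivity as the summed strength of links among the associates themselves. This follows the usual definitions in Nelson and colleagues' work, but the specific composite equations evaluated in the article are not reproduced here; the toy strengths below are invented for illustration.

```python
# Free-association strengths: strength[a][b] is the probability that cue a
# produces b as a response (toy values for illustration only).
strength = {
    "chair": {"table": 0.35, "seat": 0.20, "sit": 0.15},
    "table": {"chair": 0.40, "seat": 0.05},
    "seat":  {"chair": 0.30, "sit": 0.10},
    "sit":   {"chair": 0.25, "seat": 0.12},
}

def associates(word):
    return list(strength.get(word, {}))

def resonance(word):
    """Summed strength of links coming back to the word from its associates."""
    return sum(strength.get(a, {}).get(word, 0.0) for a in associates(word))

def connectivity(word):
    """Summed strength of links among the word's associates."""
    assoc = associates(word)
    return sum(strength.get(a, {}).get(b, 0.0)
               for a in assoc for b in assoc if a != b)

print("resonance(chair):", round(resonance("chair"), 2))      # 0.95
print("connectivity(chair):", round(connectivity("chair"), 2))  # 0.27
```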
ERIC Educational Resources Information Center
Khateb, Asaid; Khateb-Abdelgani, Manal; Taha, Haitham Y.; Ibrahim, Raphiq
2014-01-01
This study aimed at assessing the effects of letter connectivity in Arabic on visual word recognition. For this purpose, reaction times (RTs) and accuracy scores were collected from ninety third-, sixth-, and ninth-grade native Arabic speakers during a lexical decision task, using fully connected (Cw), partially connected (PCw) and…
NASA Technical Reports Server (NTRS)
Simpson, C. A.
1985-01-01
In the present study, pairs of pilots performed aircraft warning classification tasks using an isolated-word, speaker-dependent speech recognition system. Induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed; recognition errors were recorded by type for the isolated-word speaker-dependent system and by an offline technique for a connected-word speaker-dependent system. While errors increased with task loading for the isolated-word system, no such effect of task loading appeared for the connected-word system.
L2 Word Recognition Research: A Critical Review.
ERIC Educational Resources Information Center
Koda, Keiko
1996-01-01
Explores conceptual syntheses advancing second language (L2) word recognition research and uncovers agendas relating to cross-linguistic examinations of L2 processing. Describes connections between word recognition and reading, overviews the connectionist construct, and illustrates cross-linguistic…
Beato, María S; Arndt, Jason
2017-08-01
Memory is a reconstruction of the past and is prone to errors. One of the most widely used paradigms to examine false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words associatively related to a non-presented critical word. In a subsequent memory test, critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling for forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to studies of DRM lists associated with multiple critical words.
ERIC Educational Resources Information Center
Teng, Dan W.; Wallot, Sebastian; Kelty-Stephen, Damian G.
2016-01-01
Research on reading comprehension of connected text emphasizes reliance on single-word features that organize a stable, mental lexicon of words and that speed or slow the recognition of each new word. However, the time needed to recognize a word might not actually be as fixed as previous research indicates, and the stability of the mental lexicon…
Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.
Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung
2007-05-01
This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequents, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. To recognize n words, n SRNFNs are created, one modeling each word; each SRNFN receives the current frame feature and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. For filtering, a single SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with the HSRNFN for noisy speech recognition tasks.
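A minimal sketch of the prediction-error decision rule described above (not the authors' SRNFN itself): each candidate word has a per-word predictor of the next frame, and the word whose model accumulates the smallest prediction error over the utterance is chosen. The per-word predictors here are hypothetical stand-ins (simple linear frame predictors); any trained model exposing a predict_next method could be substituted.

```python
import numpy as np

class LinearFramePredictor:
    """Hypothetical stand-in for a per-word model: predicts the next
    feature frame as a linear function of the current frame."""
    def __init__(self, weights, bias):
        self.weights = weights      # (d, d) matrix
        self.bias = bias            # (d,) vector

    def predict_next(self, frame):
        return self.weights @ frame + self.bias

def recognize(frames, word_models):
    """Pick the word whose predictor yields the lowest cumulative
    squared prediction error over the frame sequence."""
    errors = {}
    for word, model in word_models.items():
        err = 0.0
        for t in range(len(frames) - 1):
            pred = model.predict_next(frames[t])
            err += float(np.sum((frames[t + 1] - pred) ** 2))
        errors[word] = err
    return min(errors, key=errors.get), errors

# Toy usage with random "trained" models and a random utterance.
rng = np.random.default_rng(0)
d = 12                                    # e.g., 12 feature coefficients per frame
models = {w: LinearFramePredictor(rng.normal(size=(d, d)) * 0.1,
                                  rng.normal(size=d) * 0.1)
          for w in ["yi", "er", "san"]}   # hypothetical Mandarin digit labels
utterance = rng.normal(size=(40, d))      # 40 frames of features
best_word, scores = recognize(utterance, models)
print(best_word, scores)
```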
McEvoy, C L; Nelson, D L; Komatsu, T
1999-09-01
Veridical memory for presented list words and false memory for nonpresented but related items were tested using the Deese/Roediger and McDermott paradigm. The strength and density of preexisting connections among the list words, and from the list words to the critical items, were manipulated. The likelihood of producing false memories in free recall varied with the strength of connections from the list words to the critical items but was inversely related to the density of the interconnections among the list words. In contrast, veridical recall of list words was positively related to the density of the interconnections. A final recognition test showed that both false and veridical memories were more likely when the list words were more densely interconnected. The results are discussed in terms of an associative model of memory, Processing Implicit and Explicit Representations (PIER 2) that describes the influence of implicitly activated preexisting information on memory performance.
Spoken Word Recognition and Serial Recall of Words from Components in the Phonological Network
ERIC Educational Resources Information Center
Siew, Cynthia S. Q.; Vitevitch, Michael S.
2016-01-01
Network science uses mathematical techniques to study complex systems such as the phonological lexicon (Vitevitch, 2008). The phonological network consists of a "giant component" (the largest connected component of the network) and "lexical islands" (smaller groups of words that are connected to each other, but not to the giant…
ERIC Educational Resources Information Center
Sunderman, Gretchen L.; Priya, Kanu
2012-01-01
This study investigates the phonological nature of the lexical links in the bilingual lexicon using different-script bilinguals. Highly proficient Hindi-English bilinguals performed a translation recognition task (i.e., decide whether two words presented sequentially are a correct translation pair). For the critical trials, the second word was a…
Connected word recognition using a cascaded neuro-computational model
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2016-10-01
We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.
Audiovisual speech facilitates voice learning.
Sheffert, Sonya M; Olson, Elizabeth
2004-02-01
In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.
Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J
2016-05-15
Computational models of word recognition already successfully used associative spreading from orthographic to semantic levels to account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.
Oh, Jooyoung; Chun, Ji-Won; Kim, Eunseong; Park, Hae-Jeong; Lee, Boreom; Kim, Jae-Jin
2017-01-01
Patients with schizophrenia exhibit several cognitive deficits, including memory impairment. Problems with recognition memory can hinder socially adaptive behavior. Previous investigations have suggested that altered activation of the frontotemporal area plays an important role in recognition memory impairment. However, the cerebral networks related to these deficits are not known. The aim of this study was to elucidate the brain networks required for recognizing socially relevant information in patients with schizophrenia performing an old-new recognition task. Sixteen patients with schizophrenia and 16 controls participated in this study. First, the subjects performed the theme-identification task during functional magnetic resonance imaging. In this task, pictures depicting social situations were presented with three words, and the subjects were asked to select the best theme word for each picture. The subjects then performed an old-new recognition task in which they were asked to discriminate whether the presented words were old or new. Task performance and neural responses in the old-new recognition task were compared between the subject groups. An independent component analysis of the functional connectivity was performed. The patients with schizophrenia exhibited decreased discriminability and increased activation of the right superior temporal gyrus compared with the controls during correct responses. Furthermore, aberrant network activities were found in the frontopolar and language comprehension networks in the patients. The functional connectivity analysis showed aberrant connectivity in the frontopolar and language comprehension networks in the patients with schizophrenia, and these aberrations possibly contribute to their low recognition performance and social dysfunction. These results suggest that the frontopolar and language comprehension networks are potential therapeutic targets in patients with schizophrenia.
Interactive Processing of Words in Connected Speech in L1 and L2.
ERIC Educational Resources Information Center
Hayashi, Takuo
1991-01-01
A study exploring the differences between first- and second-language word recognition strategies revealed that second-language listeners used more higher level information than native language listeners, when access to higher level information was not hindered by a competence-ceiling effect, indicating that word processing strategy is a function…
Benavides-Varela, Silvia; Siugzdaite, Roma; Gómez, David Maximiliano; Macagno, Francesco; Cattarossi, Luigi; Mehler, Jacques
2017-07-18
Perception and cognition in infants have been traditionally investigated using habituation paradigms, assuming that babies' memories in laboratory contexts are best constructed after numerous repetitions of the very same stimulus in the absence of interference. A crucial, yet open, question regards how babies deal with stimuli experienced in a fashion similar to everyday learning situations-namely, in the presence of interfering stimuli. To address this question, we used functional near-infrared spectroscopy to test 40 healthy newborns on their ability to encode words presented in concomitance with other words. The results evidenced a habituation-like hemodynamic response during encoding in the left-frontal region, which was associated with a progressive decrement of the functional connections between this region and the left-temporal, right-temporal, and right-parietal regions. In a recognition test phase, a characteristic neural signature of recognition recruited first the right-frontal region and subsequently the right-parietal ones. Connections originating from the right-temporal regions to these areas emerged when newborns listened to the familiar word in the test phase. These findings suggest a neural specialization at birth characterized by the lateralization of memory functions: the interplay between temporal and left-frontal regions during encoding and between temporo-parietal and right-frontal regions during recognition of speech sounds. Most critically, the results show that newborns are capable of retaining the sound of specific words despite hearing other stimuli during encoding. Thus, habituation designs that include various items may be as effective for studying early memory as repeated presentation of a single word.
A cascaded neuro-computational model for spoken word recognition
NASA Astrophysics Data System (ADS)
Hoya, Tetsuya; van Leeuwen, Cees
2010-03-01
In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
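A minimal sketch of the cascade described above: a pre-lexical layer of Gaussian radial basis units labels each incoming frame with its best-matching receptive field, and a lexical layer compares the resulting label sequence against each word's stored template. The template comparison below uses edit distance, which is an assumption for illustration; the paper's matching scheme and the toy centres are not taken from the source.

```python
import numpy as np

def rbf_labels(frames, centres, sigma=1.0):
    """Pre-lexical layer: label each frame with the index of the
    winning Gaussian receptive field."""
    d2 = ((frames[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    activations = np.exp(-d2 / (2 * sigma ** 2))
    return activations.argmax(axis=1).tolist()

def edit_distance(a, b):
    """Levenshtein distance between two label sequences."""
    dp = np.arange(len(b) + 1)
    for i, x in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, y in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (x != y))
    return int(dp[-1])

def recognize(frames, centres, lexicon):
    """Lexical layer: match the frame-label sequence against each
    word's stored label template and return the closest word."""
    labels = rbf_labels(frames, centres)
    return min(lexicon, key=lambda w: edit_distance(labels, lexicon[w]))

# Toy usage: 3 receptive-field centres in a 2-D feature space and two words.
centres = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
lexicon = {"one": [0, 1, 1, 2], "two": [2, 2, 1, 0]}
frames = np.array([[0.1, -0.1], [0.9, 1.1], [1.1, 0.9], [1.9, 0.1]])
print(recognize(frames, centres, lexicon))   # expected: "one"
```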
Modernising speech audiometry: using a smartphone application to test word recognition.
van Zyl, Marianne; Swanepoel, De Wet; Myburgh, Hermanus C
2018-04-20
This study aimed to develop and assess a method to measure word recognition abilities using a smartphone application (App) connected to an audiometer. Word lists were recorded in South African English and Afrikaans. Analyses were conducted to determine the effect of hardware used for presentation (computer, compact-disc player, or smartphone) on the frequency content of recordings. An Android App was developed to enable presentation of recorded materials via a smartphone connected to the auxiliary input of the audiometer. Experiments were performed to test feasibility and validity of the developed App and recordings. Participants were 100 young adults (18-30 years) with pure tone thresholds ≤15 dB across the frequency spectrum (250-8000 Hz). Hardware used for presentation had no significant effect on the frequency content of recordings. Listening experiments indicated good inter-list reliability for recordings in both languages, with no significant differences between scores on different lists at each of the tested intensities. Performance-intensity functions had slopes of 4.05%/dB for English and 4.75%/dB for Afrikaans lists at the 50% point. The developed smartphone App constitutes a feasible and valid method for measuring word recognition scores, and can support standardisation and accessibility of recorded speech audiometry.
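The performance-intensity slopes reported above (about 4-5 %/dB at the 50% point) can be obtained by fitting a logistic function to word recognition scores across presentation levels; for a logistic with steepness k, the slope at the 50% point is k/4 in proportion per dB. A minimal sketch follows; the toy scores and levels are invented, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(level_db, midpoint_db, k):
    """Proportion of words recognized as a function of presentation level."""
    return 1.0 / (1.0 + np.exp(-k * (level_db - midpoint_db)))

# Toy performance-intensity data (presentation level in dB, proportion correct).
levels = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
scores = np.array([0.05, 0.15, 0.40, 0.62, 0.82, 0.93, 0.98])

(midpoint, k), _ = curve_fit(logistic, levels, scores, p0=[25.0, 0.2])
slope_50 = k / 4.0                      # proportion per dB at the 50% point
print(f"50% point: {midpoint:.1f} dB, slope: {100 * slope_50:.2f} %/dB")
```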
Recognition of Handwritten Arabic words using a neuro-fuzzy network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boukharouba, Abdelhak; Bennia, Abdelhak
We present a new method for the recognition of handwritten Arabic words based on a neuro-fuzzy hybrid network. As a first step, connected components (CCs) of black pixels are detected. The system then determines which CCs are sub-words and which are stress marks. The stress marks are isolated and identified separately, and the sub-words are segmented into graphemes. Each grapheme is described by topological and statistical features. Fuzzy rules are extracted from training examples by a hybrid learning scheme comprising two phases: a rule generation phase from data using fuzzy c-means, and a rule parameter tuning phase using gradient descent learning. After learning, the network encodes in its topology the essential design parameters of a fuzzy inference system. The contribution of this technique is shown through significant tests performed on a handwritten Arabic word database.
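The two-phase learning scheme (fuzzy c-means for rule generation, then gradient descent for parameter tuning) can be illustrated with a minimal sketch. This is a generic singleton-consequent fuzzy system fitted to toy data, not the authors' network; the Gaussian membership width, learning rate, and the toy regression problem standing in for grapheme-feature classification are assumptions.

```python
import numpy as np

def fuzzy_c_means(X, n_rules, m=2.0, iters=50, seed=0):
    """Phase 1: derive rule centres from data with fuzzy c-means."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_rules))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-9
        U = 1.0 / (d ** (2 / (m - 1)))
        U /= U.sum(axis=1, keepdims=True)
    return centres

def firing_strengths(X, centres, sigma=1.0):
    """Normalised Gaussian membership of each sample in each rule's antecedent."""
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    w = np.exp(-d2 / (2 * sigma ** 2))
    return w / w.sum(axis=1, keepdims=True)

def train_consequents(X, y, centres, lr=0.5, epochs=200, sigma=1.0):
    """Phase 2: tune singleton consequents p_k by gradient descent."""
    p = np.zeros(len(centres))
    W = firing_strengths(X, centres, sigma)
    for _ in range(epochs):
        pred = W @ p                       # weighted sum of singleton outputs
        grad = W.T @ (pred - y) / len(y)   # gradient of mean squared error / 2
        p -= lr * grad
    return p

# Toy data standing in for grapheme feature vectors and targets.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(np.pi * X[:, 0]) * X[:, 1]
centres = fuzzy_c_means(X, n_rules=9)
p = train_consequents(X, y, centres)
print("training MSE:", float(np.mean((firing_strengths(X, centres) @ p - y) ** 2)))
```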
Piercey, C D; Joordens, S
2000-06-01
When performing a lexical decision task, participants can correctly categorize letter strings as words faster if they have multiple meanings (i.e., ambiguous words) than if they have one meaning (i.e., unambiguous words). In contrast, when reading connected text, participants tend to fixate longer on ambiguous words than on unambiguous words. Why are ambiguous words at an advantage in one word recognition task, and at a disadvantage in another? These disparate results can be reconciled if it is assumed that ambiguous words are relatively fast to reach a semantic-blend state sufficient for supporting lexical decisions, but then slow to escape the blend when the task requires a specific meaning be retrieved. We report several experiments that support this possibility.
ERIC Educational Resources Information Center
Knott, Lauren M.; Dewhurst, Stephen A.; Howe, Mark L.
2012-01-01
Factors that affect categorical and associative false memory illusions were investigated in 2 experiments. In Experiment 1, backward associative strength (BAS) from the list word to the critical lure and interitem connectivity were manipulated in Deese-Roediger-McDermott (DRM) and category list types. For both recall and recognition tasks, the…
Developmental reversals in false memory: now you see them, now you don't!
Holliday, Robyn E; Brainerd, Charles J; Reyna, Valerie F
2011-03-01
A developmental reversal in false memory is the counterintuitive phenomenon of higher levels of false memory in older children, adolescents, and adults than in younger children. The ability of verbatim memory to suppress this age trend in false memory was evaluated using the Deese-Roediger-McDermott (DRM) paradigm. Seven and 11-year-old children studied DRM lists either in a standard condition (whole words) that normally produces high levels of false memory or in an alternative condition that should enhance verbatim memory (word fragments). Half the children took 1 recognition test, and the other half took 3 recognition tests. In the single-test condition, the typical age difference in false memory was found for the word condition (higher false memory for 11-year-olds than for 7-year-olds), but in the word fragment condition false memory was lower in the older children. In the word condition, false memory increased over successive recognition tests. Our findings are consistent with 2 principles of fuzzy-trace theory's explanation of false memories: (a) reliance on verbatim rather than gist memory causes such errors to decline with age, and (b) repeated testing increases reliance on gist memory in older children and adults who spontaneously connect meaning across events. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Teng, Dan W; Wallot, Sebastian; Kelty-Stephen, Damian G
2016-12-01
Research on reading comprehension of connected text emphasizes reliance on single-word features that organize a stable mental lexicon of words and that speed or slow the recognition of each new word. However, the time needed to recognize a word might not actually be as fixed as previous research indicates, and the stability of the mental lexicon may change with task demands. The present study compares the effects of narrative coherence in self-paced story reading with single-word feature effects in lexical decision. We presented single strings of letters to 24 participants, in both lexical decision and self-paced story reading. Both tasks included the same words composing a set of adjective-noun pairs. Reading times revealed that the tasks, and the order of the presentation of the tasks, changed and/or eliminated familiar effects of single-word features. Specifically, experiencing the lexical-decision task first gradually emphasized the role of single-word features, and experiencing the self-paced story-reading task afterwards counteracted the effect of single-word features. We discuss the implications that task-dependence and narrative coherence might have for the organization of the mental lexicon. Future work will need to consider what architectures suit the apparent flexibility with which task can accentuate or diminish effects of single-word features.
Arabic handwritten: pre-processing and segmentation
NASA Astrophysics Data System (ADS)
Maliki, Makki; Jassim, Sabah; Al-Jawad, Naseer; Sellahewa, Harin
2012-06-01
This paper is concerned with pre-processing and segmentation tasks that influence the performance of Optical Character Recognition (OCR) systems and handwritten/printed text recognition. In Arabic, these tasks are adversely affected by the fact that many words are made up of sub-words, that many sub-words have one or more associated diacritics that are not connected to the sub-word's body, and that multiple sub-words may overlap. To overcome these problems we investigate and develop segmentation techniques that first segment a document into sub-words, link the diacritics with their sub-words, and remove possible overlapping between words and sub-words. We also investigate two approaches for pre-processing tasks: estimating sub-word baselines, and determining parameters that yield appropriate slope correction and slant removal. We investigate the use of linear regression on sub-word pixels to determine their central x and y coordinates, as well as their high-density parts. We also develop a new incremental rotation procedure, performed on sub-words, that determines the best rotation angle needed to realign baselines. We demonstrate the benefits of these proposals by conducting extensive experiments on publicly available databases and in-house created databases. These algorithms help improve character segmentation accuracy by transforming handwritten Arabic text into a form that could benefit from analysis of printed text.
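As a rough illustration of the baseline-estimation idea, one can fit a line by least squares to the foreground pixel coordinates of a sub-word and rotate those coordinates so the fitted baseline becomes horizontal. This is a minimal sketch, not the paper's incremental rotation procedure; the synthetic stroke and the rotation about the centroid are assumptions.

```python
import numpy as np

def estimate_baseline(binary_img):
    """Fit y = a*x + b by least squares over the foreground pixel
    coordinates of a sub-word (rows = y, columns = x)."""
    ys, xs = np.nonzero(binary_img)
    a, b = np.polyfit(xs, ys, deg=1)
    return a, b

def realign_coordinates(binary_img):
    """Rotate the foreground pixel coordinates about their centroid so
    that the fitted baseline becomes horizontal; returns new (x, y)."""
    ys, xs = np.nonzero(binary_img)
    a, _ = estimate_baseline(binary_img)
    theta = -np.arctan(a)                      # undo the baseline slope
    cx, cy = xs.mean(), ys.mean()
    x0, y0 = xs - cx, ys - cy
    xr = x0 * np.cos(theta) - y0 * np.sin(theta) + cx
    yr = x0 * np.sin(theta) + y0 * np.cos(theta) + cy
    return xr, yr

# Toy usage: a synthetic slanted stroke; after realignment the residual
# slope of the fitted baseline is approximately zero.
img = np.zeros((60, 120), dtype=np.uint8)
for x in range(20, 100):
    img[20 + (x - 20) // 4, x] = 1
xr, yr = realign_coordinates(img)
print("slope before:", round(estimate_baseline(img)[0], 3),
      "slope after:", round(np.polyfit(xr, yr, 1)[0], 3))
```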
Sub-word image clustering in Farsi printed books
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-02-01
Most OCR systems are designed for the recognition of a single page. In the case of unfamiliar typefaces, low-quality paper, and degraded prints, the performance of these products drops sharply. However, an OCR system can use the redundancy of word occurrences in large documents to improve recognition results. In this paper, we propose a sub-word image clustering method for applications dealing with large printed documents. We assume that the whole document is printed in a single unknown font with low print quality. Our proposed method finds clusters of equivalent sub-word images with an incremental algorithm. Because of the low print quality, we propose an image matching algorithm for measuring the distance between two sub-word images, based on Hamming distance and the ratio of the area to the perimeter of the connected components. We built a ground-truth dataset of more than 111,000 sub-word images to evaluate our method. All of these images were extracted from an old Farsi book. We cluster all of these sub-words, including isolated letters and even punctuation marks. The centers of the resulting clusters are then labeled manually. We show that all sub-words of the book can be recognized with more than 99.7% accuracy by assigning the label of each cluster center to all of its members.
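A minimal sketch of a distance measure in the spirit described above, combining a Hamming-style pixel mismatch term with the difference in area-to-perimeter ratios of the connected components. The exact combination and weighting used in the paper are not specified here; the weighting factor, top-left padding alignment, and the simple boundary-pixel perimeter estimate are assumptions.

```python
import numpy as np
from scipy.ndimage import label, binary_erosion

def area_perimeter_ratio(binary_img):
    """Mean area/perimeter ratio over the connected components of a
    binary sub-word image (perimeter estimated as boundary pixel count)."""
    labelled, n = label(binary_img)
    ratios = []
    for k in range(1, n + 1):
        comp = labelled == k
        area = comp.sum()
        perimeter = comp.sum() - binary_erosion(comp).sum()  # boundary pixels
        ratios.append(area / max(perimeter, 1))
    return float(np.mean(ratios)) if ratios else 0.0

def subword_distance(img_a, img_b, shape_weight=1.0):
    """Distance between two binary sub-word images: normalised Hamming
    distance after padding to a common size, plus a weighted difference
    of area/perimeter ratios."""
    h = max(img_a.shape[0], img_b.shape[0])
    w = max(img_a.shape[1], img_b.shape[1])

    def pad(im):
        return np.pad(im, ((0, h - im.shape[0]), (0, w - im.shape[1])))

    a, b = pad(img_a), pad(img_b)
    hamming = np.count_nonzero(a != b) / (h * w)
    shape_term = abs(area_perimeter_ratio(img_a) - area_perimeter_ratio(img_b))
    return hamming + shape_weight * shape_term

# Toy usage with two small binary patterns.
a = np.zeros((10, 12), dtype=np.uint8); a[3:7, 2:10] = 1
b = np.zeros((10, 12), dtype=np.uint8); b[3:8, 2:10] = 1
print(round(subword_distance(a, b), 3))
```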
Limited connected speech experiment
NASA Astrophysics Data System (ADS)
Landell, P. B.
1983-03-01
The purpose of this contract was to demonstrate that connected speech recognition (CSR) can be performed in real time on a vocabulary of one hundred words and to test the performance of the CSR system for twenty-five male and twenty-five female speakers. This report describes the contractor's real-time laboratory CSR system, the database and training software developed in accordance with the contract, and the results of the performance tests.
Reading Fluency: The Whole Is More than the Parts
ERIC Educational Resources Information Center
Katzir, Tami; Kim, Youngsuk; Wolf, Maryanne; O'Brien, Beth; Kennedy, Becky; Lovett, Maureen; Morris, Robin
2006-01-01
This study examined the relative contributions of phonological awareness, orthographic pattern recognition, and rapid letter naming to fluent word and connected-text reading within a dyslexic sample of 123 children in second and third grades. Participants were assessed on a variety of fluency measures and reading subskills. Correlations and…
Fan, Qiuyun; Anderson, Adam W; Davis, Nicole; Cutting, Laurie E
2014-10-24
With the advent of neuroimaging techniques, especially functional MRI (fMRI), studies have mapped brain regions that are associated with good and poor reading, most centrally a region within the left occipito-temporal/fusiform region (L-OT/F) often referred to as the visual word form area (VWFA). Despite an abundance of fMRI studies of the putative VWFA, research about its structural connectivity has just started. Given that the putative VWFA may be connected to distributed regions in the brain, it remains unclear how this network is engaged in constituting well-tuned reading circuitry. Here we used diffusion MRI to study the structural connectivity patterns of the putative VWFA and surrounding areas within the L-OT/F in children with typically developing (TD) reading ability and with word recognition deficits (WRD; sometimes referred to as dyslexia). We found that L-OT/F connectivity varied along a posterior-anterior gradient, with specific structural connectivity patterns related to reading ability in the ROIs centered upon the putative VWFA. Findings suggest that the architecture of the putative VWFA connectivity is fundamentally different between TD and WRD, with TD showing greater connectivity to linguistic regions than WRD, and WRD showing greater connectivity to visual and parahippocampal regions than TD. Findings thus reveal clear structural abnormalities underlying the functional abnormalities in the putative VWFA in WRD. Copyright © 2014 Elsevier B.V. All rights reserved.
Symbolic Play Connects to Language through Visual Object Recognition
ERIC Educational Resources Information Center
Smith, Linda B.; Jones, Susan S.
2011-01-01
Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…
Since They Are Going to Watch TV Anyway, Why Not Connect It To Reading.
ERIC Educational Resources Information Center
Hutchison, Laveria F.
In view of the great amount of television viewing among poor readers, the Learning Model for Watching Television was developed to capitalize on students' television watching proclivities. The model encompasses three major overlapping component skill areas to reinforce classroom learning: active listening skills, auditory word recognition skills,…
Prospective and Retrospective Processing in Associative Mediated Priming
ERIC Educational Resources Information Center
Jones, Lara L.
2012-01-01
Mediated priming refers to the faster word recognition of a target (e.g., milk) following presentation of a prime (e.g., pasture) that is related indirectly via a connecting "mediator" (e.g., cow). Association strength may be an important factor in whether mediated priming occurs prospectively (with target activation prior to its presentation) or…
L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition
ERIC Educational Resources Information Center
Hamada, Megumi
2017-01-01
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…
Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals
ERIC Educational Resources Information Center
Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.
2017-01-01
Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…
Automated smartphone audiometry: Validation of a word recognition test app.
Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J
2018-03-01
Objective: To develop and validate an automated smartphone word recognition test. Study design: Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold-standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
Word Recognition in Auditory Cortex
ERIC Educational Resources Information Center
DeWitt, Iain D. J.
2013-01-01
Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…
Lexical and age effects on word recognition in noise in normal-hearing children.
Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing
2015-12-01
The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children aged between 3 and 6 years. Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) with a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects, with syllable length and difficulty level as the main factors, on word recognition in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. Word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and the lexical characteristics of words had significant influences on Mandarin Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. Word-recognition performance in noise increased with age in NH children aged 3-6 years and had not reached a plateau at 6 years of age. Copyright © 2015. Published by Elsevier Ireland Ltd.
ERIC Educational Resources Information Center
Frye, Elizabeth M.; Gosky, Ross
2012-01-01
The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…
The Impact of Strong Assimilation on the Perception of Connected Speech
ERIC Educational Resources Information Center
Gaskell, M. Gareth; Snoeren, Natalie D.
2008-01-01
Models of compensation for phonological variation in spoken word recognition differ in their ability to accommodate complete assimilatory alternations (such as run assimilating fully to rum in the context of a quick run picks you up). Two experiments addressed whether such complete changes can be observed in casual speech, and if so, whether they…
The effect of word concreteness on recognition memory.
Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B
2006-09-01
Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.
Exploring a recognition-induced recognition decrement
Dopkins, Stephen; Ngo, Catherine Trinh; Sargent, Jesse
2007-01-01
Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency. PMID:17063915
Interaction in Spoken Word Recognition Models: Feedback Helps.
Magnuson, James S; Mirman, Daniel; Luthra, Sahil; Strauss, Ted; Harris, Harlan D
2018-01-01
Human perception, cognition, and action require fast integration of bottom-up signals with top-down knowledge and context. A key theoretical perspective in cognitive science is the interactive activation hypothesis: forward and backward flow in bidirectionally connected neural networks allows humans and other biological systems to approximate optimal integration of bottom-up and top-down information under real-world constraints. An alternative view is that online feedback is neither necessary nor helpful; purely feedforward alternatives can be constructed for any feedback system, and online feedback could not improve processing and would preclude veridical perception. In the domain of spoken word recognition, the latter view was apparently supported by simulations using the interactive activation model, TRACE, with and without feedback: as many words were recognized more quickly without feedback as were recognized faster with feedback. However, these simulations used only a small set of words and did not address a primary motivation for interaction: making a model robust in noise. We conducted simulations using hundreds of words, and found that the majority were recognized more quickly with feedback than without. More importantly, as we added noise to inputs, accuracy and recognition times were better with feedback than without. We follow these simulations with a critical review of recent arguments that online feedback in interactive activation models like TRACE is distinct from other potentially helpful forms of feedback. We conclude that in addition to providing the benefits demonstrated in our simulations, online feedback provides a plausible means of implementing putatively distinct forms of feedback, supporting the interactive activation hypothesis.
Acquired prosopagnosia without word recognition deficits.
Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley
2015-01-01
It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words are carried out by common mechanisms [Behrmann, M., & Plaut, D. C. ( 2013 ). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.
Improved word recognition for observers with age-related maculopathies using compensation filters
NASA Technical Reports Server (NTRS)
Lawton, Teri B.
1988-01-01
A method for improving word recognition for people with age-related maculopathies, which cause a loss of central vision, is discussed. It is found that the use of individualized compensation filters based on a person's normalized contrast sensitivity function can improve word recognition for people with age-related maculopathies. It is shown that 27-70 percent more magnification is needed for unfiltered words compared to filtered words. The improvement in word recognition is positively correlated with the severity of vision loss.
[Explicit memory for type font of words in source monitoring and recognition tasks].
Hatanaka, Yoshiko; Fujita, Tetsuya
2004-02-01
We investigated whether people can consciously remember the type fonts of words using two methods of examining explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and the test words using two kinds of type fonts: Gothic and MARU. After studying words under one of two encoding conditions, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words were previously presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. These data showed conscious recollection of the type font of words in the source-monitoring task and a dissociation between source-monitoring and old/new-recognition performance.
Genetic and environmental influences on word recognition and spelling deficits as a function of age.
Friend, Angela; DeFries, John C; Wadsworth, Sally J; Olson, Richard K
2007-05-01
Previous twin studies have suggested a possible developmental dissociation between genetic influences on word recognition and spelling deficits, wherein genetic influence declined across age for word recognition, and increased for spelling recognition. The present study included two measures of word recognition (timed, untimed) and two measures of spelling (recognition, production) in younger and older twins. The heritability estimates for the two word recognition measures were .65 (timed) and .64 (untimed) in the younger group and .65 and .58 respectively in the older group. For spelling, the corresponding estimates were .57 (recognition) and .51 (production) in the younger group and .65 and .67 in the older group. Although these age group differences were not significant, the pattern of decline in heritability across age for reading and increase for spelling conformed to that predicted by the developmental dissociation hypothesis. However, the tests for an interaction between genetic influences on word recognition and spelling deficits as a function of age were not significant.
Strand, Julia F; Sommers, Mitchell S
2011-09-01
Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
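The phi-square statistic mentioned above can be derived from phoneme (or viseme) confusion matrices; a minimal sketch of one such computation is below, treating two stimuli's response-count rows as a contingency table and normalising the resulting chi-square by the number of observations. The exact formulation used by Strand and Sommers may differ, and the toy confusion counts are invented for illustration.

```python
import numpy as np
from scipy.stats import chi2_contingency

def phi_square(counts_a, counts_b):
    """Phi-square dissimilarity between two stimuli, computed from their
    response-count distributions (rows of a confusion matrix):
    chi-square on the 2 x K contingency table divided by total N."""
    table = np.vstack([counts_a, counts_b])
    chi2, _, _, _ = chi2_contingency(table)
    return chi2 / table.sum()

# Toy confusion counts over four response categories for three segments.
confusions = {
    "b": np.array([80, 10, 8, 2]),
    "p": np.array([12, 75, 9, 4]),
    "m": np.array([5, 6, 85, 4]),
}
for x, y in [("b", "p"), ("b", "m")]:
    print(x, y, round(phi_square(confusions[x], confusions[y]), 3))
```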
Recognizing Spoken Words: The Neighborhood Activation Model
Luce, Paul A.; Pisoni, David B.
2012-01-01
Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults. PMID:9504270
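The neighborhood probability rule mentioned above is commonly rendered as a frequency-weighted Luce choice ratio; a schematic form (notation assumed here, not quoted from the paper) is:

```latex
% Schematic frequency-weighted neighborhood probability rule (NAM).
% p(\mathrm{ID}) : probability of identifying the stimulus word
% p(S)           : stimulus-word intelligibility (phoneme-based probability)
% p(N_j)         : confusability of the j-th neighbor with the stimulus
% F_S, F_{N_j}   : frequency weights of the stimulus word and its neighbors
\[
  p(\mathrm{ID}) \;=\;
  \frac{p(S)\,F_S}
       {p(S)\,F_S \;+\; \sum_{j=1}^{n} p(N_j)\,F_{N_j}}
\]
```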
The effect of background noise on the word activation process in nonnative spoken-word recognition.
Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland
2018-02-01
This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was found to be more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available word recognition accuracy dropped and word frequency information could no longer be used suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with a higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao
2015-04-15
This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
An Investigation of the Role of Grapheme Units in Word Recognition
ERIC Educational Resources Information Center
Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel
2012-01-01
In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…
Einarsson, Einar-Jón; Petersen, Hannes; Wiebe, Thomas; Fransson, Per-Anders; Magnusson, Måns; Moëll, Christian
2011-10-01
To investigate word recognition in noise in subjects treated in childhood with chemotherapy, study benefits of open-fitting hearing-aids for word recognition, and investigate whether self-reported hearing-handicap corresponded to subjects' word recognition ability. Subjects diagnosed with cancer and treated with platinum-based chemotherapy in childhood underwent audiometric evaluations. Fifteen subjects (eight females and seven males) fulfilled the criteria set for the study, and four of those received customized open-fitting hearing-aids. Subjects with cisplatin-induced ototoxicity had severe difficulties recognizing words in noise, and scored as low as 54% below reference scores standardized for age and degree of hearing loss. Hearing-impaired subjects' self-reported hearing-handicap correlated significantly with word recognition in a quiet environment but not in noise. Word recognition in noise improved markedly (up to 46%) with hearing-aids, and the self-reported hearing-handicap and disability score were reduced by more than 50%. This study demonstrates the importance of testing word recognition in noise in subjects treated with platinum-based chemotherapy in childhood, and to use specific custom-made questionnaires to evaluate the experienced hearing-handicap. Open-fitting hearing-aids are a good alternative for subjects suffering from poor word recognition in noise.
Large-scale functional networks connect differently for processing words and symbol strings.
Liljeström, Mia; Vartiainen, Johanna; Kujala, Jan; Salmelin, Riitta
2018-01-01
Reconfigurations of synchronized large-scale networks are thought to be central neural mechanisms that support cognition and behavior in the human brain. Magnetoencephalography (MEG) recordings together with recent advances in network analysis now allow for sub-second snapshots of such networks. In the present study, we compared frequency-resolved functional connectivity patterns underlying reading of single words and visual recognition of symbol strings. Word reading emphasized coherence in a left-lateralized network with nodes in classical perisylvian language regions, whereas symbol processing recruited a bilateral network, including connections between frontal and parietal regions previously associated with spatial attention and visual working memory. Our results illustrate the flexible nature of functional networks, whereby processing of different form categories, written words vs. symbol strings, leads to the formation of large-scale functional networks that operate at distinct oscillatory frequencies and incorporate task-relevant regions. These results suggest that category-specific processing should be viewed not so much as a local process but as a distributed neural process implemented in signature networks. For words, increased coherence was detected particularly in the alpha (8-13 Hz) and high gamma (60-90 Hz) frequency bands, whereas increased coherence for symbol strings was observed in the high beta (21-29 Hz) and low gamma (30-45 Hz) frequency range. These findings attest to the role of coherence in specific frequency bands as a general mechanism for integrating stimulus-dependent information across brain regions.
Word-level recognition of multifont Arabic text using a feature vector matching approach
NASA Astrophysics Data System (ADS)
Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III
1996-03-01
Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
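The matching stage described above lends itself to a simple nearest-neighbour formulation. The sketch below is an illustrative assumption, not the authors' implementation: the feature dimensionality, the cosine scoring, and the `lexicon_db` structure (several precomputed vectors per word, e.g., one per font or noise model) are all hypothetical.

```python
import numpy as np

def match_word_image(query_vec, lexicon_db, top_n=5):
    """Rank lexicon words by similarity between a query feature vector and
    precomputed vectors (several per word, e.g. one per font/noise model).

    lexicon_db: dict mapping word -> 2D array of shape (n_variants, n_features)
    query_vec:  1D array of image-morphological features for the unknown word image
    """
    query = query_vec / (np.linalg.norm(query_vec) + 1e-12)
    scores = {}
    for word, variants in lexicon_db.items():
        v = variants / (np.linalg.norm(variants, axis=1, keepdims=True) + 1e-12)
        # Keep the best-matching variant for this word (cosine similarity).
        scores[word] = float(np.max(v @ query))
    # Return the top-n hypotheses, highest match score first.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# Usage: a toy 4-dimensional feature space with two lexicon entries.
lexicon_db = {
    "كتاب": np.array([[0.9, 0.1, 0.3, 0.5], [0.8, 0.2, 0.4, 0.5]]),
    "قلم":  np.array([[0.1, 0.9, 0.6, 0.2]]),
}
print(match_word_image(np.array([0.85, 0.15, 0.35, 0.5]), lexicon_db))
```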
Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian
2018-02-01
Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition
ERIC Educational Resources Information Center
Sulpizio, Simone; McQueen, James M.
2012-01-01
In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…
The Slow Developmental Time Course of Real-Time Spoken Word Recognition
ERIC Educational Resources Information Center
Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob
2015-01-01
This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…
ERIC Educational Resources Information Center
Ebert, Ashlee A.
2009-01-01
Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…
Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.
Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro
2011-12-01
The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.
The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition
NASA Astrophysics Data System (ADS)
Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher
2012-01-01
This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A framework for multi-word recognition based on weighted finite state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the RIMES-ICDAR2011 competition. This system outperformed all previously proposed systems on these tasks.
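The paper combines its recognizers with weighted finite-state transducers; the sketch below is a deliberately simplified stand-in that fuses ranked hypotheses from several isolated-word recognizers by weighted score voting. The hypothesis format and the per-recognizer weights are assumptions for illustration only.

```python
from collections import defaultdict

def combine_hypotheses(recognizer_outputs, weights=None):
    """Fuse ranked word hypotheses from several recognizers by weighted score voting.

    recognizer_outputs: list of lists of (word, score) pairs, one list per recognizer,
                        where each score lies in [0, 1] (higher is better).
    weights:            optional per-recognizer reliability weights.
    """
    if weights is None:
        weights = [1.0] * len(recognizer_outputs)
    fused = defaultdict(float)
    for w, hyps in zip(weights, recognizer_outputs):
        for word, score in hyps:
            fused[word] += w * score
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Usage: three recognizers disagree on an isolated word.
outputs = [
    [("bonjour", 0.7), ("bonsoir", 0.2)],
    [("bonsoir", 0.6), ("bonjour", 0.3)],
    [("bonjour", 0.8)],
]
print(combine_hypotheses(outputs, weights=[1.0, 0.8, 1.2]))
```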
Form–meaning links in the development of visual word recognition
Nation, Kate
2009-01-01
Learning to read takes time and it requires explicit instruction. Three decades of research has taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development. PMID:19933139
Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods
NASA Astrophysics Data System (ADS)
Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi
2018-03-01
Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated word recognition is one of the key and most difficult problems in TWS. Most existing methods of Tibetan abbreviated word recognition are rule-based approaches, which require vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, significantly improving the performance of Tibetan word segmentation.
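Sequence-tagging segmenters are typically trained on per-syllable labels derived from already-segmented text. A minimal sketch of that data-preparation step follows; the BIES label scheme and the toy syllable strings are assumptions, not the authors' exact setup.

```python
def syllables_to_bies(segmented_words):
    """Convert a list of words (each a list of syllables) into (syllable, tag) pairs
    using the BIES scheme: B=begin, I=inside, E=end, S=single-syllable word."""
    tagged = []
    for word in segmented_words:
        if len(word) == 1:
            tagged.append((word[0], "S"))
        else:
            tagged.append((word[0], "B"))
            tagged.extend((syl, "I") for syl in word[1:-1])
            tagged.append((word[-1], "E"))
    return tagged

# Usage with placeholder Latin syllables standing in for Tibetan ones.
words = [["bod"], ["skad", "yig"], ["slob", "grwa", "chen"]]
for syllable, tag in syllables_to_bies(words):
    print(syllable, tag)
```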
The low-frequency encoding disadvantage: Word frequency affects processing demands.
Diana, Rachel A; Reder, Lynne M
2006-07-01
Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.
L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.
Hamada, Megumi
2017-10-01
L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited, even though the vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than in the middle position, the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.
A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.
ERIC Educational Resources Information Center
Paul, James E., Jr.
Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…
Emotion and language: Valence and arousal affect word recognition
Brysbaert, Marc; Warriner, Amy Beth
2014-01-01
Emotion influences most aspects of cognition and behavior, but emotional factors are conspicuously absent from current models of word recognition. The influence of emotion on word recognition has mostly been reported in prior studies on the automatic vigilance for negative stimuli, but the precise nature of this relationship is unclear. Various models of automatic vigilance have claimed that the effect of valence on response times is categorical, an inverted-U, or interactive with arousal. The present study used a sample of 12,658 words, and included many lexical and semantic control factors, to determine the precise nature of the effects of arousal and valence on word recognition. Converging empirical patterns observed in word-level and trial-level data from lexical decision and naming indicate that valence and arousal exert independent monotonic effects: Negative words are recognized more slowly than positive words, and arousing words are recognized more slowly than calming words. Valence explained about 2% of the variance in word recognition latencies, whereas the effect of arousal was smaller. Valence and arousal do not interact, but both interact with word frequency, such that valence and arousal exert larger effects among low-frequency words than among high-frequency words. These results necessitate a new model of affective word processing whereby the degree of negativity monotonically and independently predicts the speed of responding. This research also demonstrates that incorporating emotional factors, especially valence, improves the performance of models of word recognition. PMID:24490848
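The reported pattern (independent monotonic effects of valence and arousal, each interacting with word frequency) corresponds to an ordinary regression of recognition latencies on those predictors. A hedged sketch follows; the data frame, variable names, and toy values are hypothetical and the model is only a schematic stand-in for the authors' analyses.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical word-level data: mean lexical decision RT, valence and arousal
# ratings (1-9 scales), and log word frequency.
df = pd.DataFrame({
    "rt":       [612, 650, 598, 701, 640, 665, 605, 690],
    "valence":  [7.8, 2.1, 6.9, 1.8, 5.0, 3.2, 8.1, 2.5],
    "arousal":  [3.0, 6.5, 4.1, 7.2, 4.8, 5.9, 3.5, 6.8],
    "log_freq": [3.2, 1.1, 2.8, 0.9, 2.0, 1.5, 3.5, 1.2],
})

# Additive effects of valence and arousal, each allowed to interact with
# frequency (but not with each other), mirroring the pattern described above.
model = smf.ols("rt ~ valence*log_freq + arousal*log_freq", data=df).fit()
print(model.summary())
```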
Clinical implications of word recognition differences in earphone and aided conditions
McRackan, Theodore R.; Ahlstrom, Jayne B.; Clinkscales, William B.; Meyer, Ted A.; Dubno, Judy R
2017-01-01
Objective: To compare word recognition scores for adults with hearing loss measured using earphones and in the sound field without and with hearing aids (HA). Study design: Independent review of pre-surgical audiological data from an active middle ear implant (MEI) FDA clinical trial. Setting: Multicenter prospective FDA clinical trial. Patients: Ninety-four adult HA users. Interventions/Main outcomes measured: Pre-operative earphone, unaided, and aided pure tone thresholds, word recognition scores, and speech intelligibility index. Results: We performed an independent review of pre-surgical audiological data from a MEI FDA trial and compared unaided and aided word recognition scores with participants' HAs fit according to the NAL-R algorithm. For 52 participants (55.3%), differences in scores between earphone and aided conditions were >10%; for 33 participants (35.1%), earphone scores were higher by 10% or more than aided scores. These participants had significantly higher pure tone thresholds at 250 Hz, 500 Hz, and 1000 Hz, higher pure tone averages, higher speech recognition thresholds, and higher earphone speech levels (p=0.002). No significant correlation was observed between word recognition scores measured with earphones and with hearing aids (r=.14; p=0.16), whereas a moderately high positive correlation was observed between unaided and aided word recognition (r=0.68; p<0.001). Conclusion: Results of these analyses do not support the common clinical practice of using word recognition scores measured with earphones to predict aided word recognition or hearing aid benefit. Rather, these results provide evidence supporting the measurement of aided word recognition in patients who are considering hearing aids. PMID:27631832
Adult Word Recognition and Visual Sequential Memory
ERIC Educational Resources Information Center
Holmes, V. M.
2012-01-01
Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
ERIC Educational Resources Information Center
Barnhart, Anthony S.; Goldinger, Stephen D.
2010-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…
Asymmetries in Early Word Recognition: The Case of Stops and Fricatives
ERIC Educational Resources Information Center
Altvater-Mackensen, Nicole; van der Feest, Suzanne V. H.; Fikkert, Paula
2014-01-01
Toddlers' discrimination of native phonemic contrasts is generally unproblematic. Yet using those native contrasts in word learning and word recognition can be more challenging. In this article, we investigate perceptual versus phonological explanations for asymmetrical patterns found in early word recognition. We systematically investigated the…
Memory without context: amnesia with confabulations after infarction of the right capsular genu.
Schnider, A; Gutbrod, K; Hess, C W; Schroth, G
1996-01-01
OBJECTIVE--To explore the mechanism of an amnesia marked by confabulations and lack of insight in a patient with an infarct of the right inferior capsular genu. The confabulations could mostly be traced back to earlier events, indicating that the memory disorder ensued from an inability to store the temporal and spatial context of information acquisition rather than a failure to store new information. METHODS--To test the patient's ability to store the context of information acquisition, two experiments were composed in which she was asked to decide when or where she had learned the words from two word lists presented at different points in time or in different rooms. To test her ability to store new information, two continuous recognition tests with novel non-words and nonsense designs were used. Recognition of these stimuli was assumed to be independent of the context of acquisition because the patient could not have an a priori sense of familiarity with them. RESULTS--The patient performed at chance in the experiments probing knowledge of the context of information acquisition, although she recognised the presented words almost as well as the controls. By contrast, her performance was normal in the recognition tests with non-words and nonsense designs. CONCLUSION--These findings indicate that the patient's amnesia was based on an inability to store the context of information acquisition rather than the information itself. Based on an analysis of her lesion, which disconnected the thalamus from the orbitofrontal cortex and the amygdala, and considering the similarities between her disorder, Wernicke-Korsakoff syndrome, and the amnesia after orbitofrontal lesions, it is proposed that contextual amnesia results from interruption of the loop connecting the amygdala, the dorsomedial nucleus, and the orbitofrontal cortex. Images PMID:8708688
Anticipatory coarticulation facilitates word recognition in toddlers.
Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan
2015-09-01
Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18-24 month-olds (n=29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eyetracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word-learners can take advantage of anticipatory sub-phonemic cues during word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.
Beato, María Soledad; Arndt, Jason
2014-01-01
False memory illusions have been widely studied using the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words semantically related to a single nonpresented critical word. In a memory test, critical words are often falsely recalled and recognized. The present study was conducted to measure the levels of false recognition for seventy-five Spanish DRM word lists that have multiple critical words per list. Lists included three critical words (e.g., HELL, LUCIFER, and SATAN) simultaneously associated with six studied words (e.g., devil, demon, fire, red, bad, and evil). Different levels of forward associative strength (FAS) between the critical words and their studied associates were used in the construction of the lists. Specifically, we started with the lists that had the highest possible FAS values and progressively decreased FAS in order to obtain the 75 lists. Six words per list, simultaneously associated with three critical words, were sufficient to produce false recognition. Furthermore, there was wide variability in rates of false recognition (e.g., 53% for DUNGEON, PRISON, and GRATES; 1% for BRACKETS, GARMENT, and CLOTHING). Finally, there was no correlation between false recognition and associative strength: false recognition variability could not be attributed to differences in forward associative strength.
An L2 Reader's Word-Recognition Strategies: Transferred or Developed
ERIC Educational Resources Information Center
Alco, Bonnie
2010-01-01
Transfer of reading strategies from the first language (L1) to the second language (L2) has long puzzled educators, but what happens if the L1 is an alphabet language and the second is not, or if there is a mismatch in the languages' grapheme-phoneme connection? Although some students readily adjust to reading and writing in their second language,…
Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing
2015-01-01
A novel blind recognition algorithm is proposed to recognize the parameters of frame synchronization words in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, using soft decisions can improve recognition accuracy. Therefore, drawing on the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other modulation formats. The complete recognition steps of both the hard-decision and soft-decision algorithms are given in detail. Finally, simulation results show that both algorithms can blindly recognize the parameters of frame synchronization words, and that the improved algorithm clearly enhances recognition accuracy.
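A common hard-decision approach to this problem is to fold the demodulated bit stream at a candidate frame length and look for bit positions that stay nearly constant across frames; those positions expose the synchronization word and its offset. The sketch below follows that generic idea under assumed parameters (purity threshold, candidate lengths) and is not the algorithm from the paper.

```python
import numpy as np

def find_sync_word(bits, candidate_lengths, min_frames=20, purity=0.95):
    """Hard-decision blind search for a frame sync word in a 0/1 bit stream.

    For each candidate frame length, fold the stream into a (frames x length)
    matrix and flag columns whose bits are almost always identical; a run of
    such columns is a candidate sync word."""
    best = None
    for L in candidate_lengths:
        n_frames = len(bits) // L
        if n_frames < min_frames:
            continue
        m = np.asarray(bits[: n_frames * L]).reshape(n_frames, L)
        col_mean = m.mean(axis=0)
        stable = (col_mean >= purity) | (col_mean <= 1 - purity)
        score = stable.sum()
        if best is None or score > best[0]:
            sync_bits = np.round(col_mean[stable]).astype(int)
            best = (score, L, np.flatnonzero(stable), sync_bits)
    return best  # (num stable columns, frame length, sync positions, sync word bits)

# Usage: 200 frames of length 64 with the sync word 10110010 at offset 5 plus random payload.
rng = np.random.default_rng(0)
sync = [1, 0, 1, 1, 0, 0, 1, 0]
frames = [list(rng.integers(0, 2, 5)) + sync + list(rng.integers(0, 2, 51)) for _ in range(200)]
bits = [b for f in frames for b in f]
print(find_sync_word(bits, candidate_lengths=range(32, 100)))
```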
Recognition intent and visual word recognition.
Wang, Man-Ying; Ching, Chi-Le
2009-03-01
This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.
The Impact of Left and Right Intracranial Tumors on Picture and Word Recognition Memory
ERIC Educational Resources Information Center
Goldstein, Bram; Armstrong, Carol L.; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V.
2004-01-01
This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH…
Frisch, Stefan A.; Pisoni, David B.
2012-01-01
Objective: Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design: A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results: Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions: Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
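The two lexical access simulations differ only in when phoneme decisions are committed. The sketch below illustrates that architectural distinction under toy assumptions: the confusion matrix, the lexicon, and the noise model are invented for illustration and do not reproduce the published simulation.

```python
import numpy as np

rng = np.random.default_rng(1)
phones = ["b", "p", "a", "t", "d"]
idx = {p: i for i, p in enumerate(phones)}
# Toy identification probabilities: rows = intended phoneme, columns = response phoneme.
ident = np.array([
    [0.8, 0.2, 0.0, 0.0, 0.0],   # b
    [0.2, 0.8, 0.0, 0.0, 0.0],   # p
    [0.0, 0.0, 1.0, 0.0, 0.0],   # a
    [0.0, 0.0, 0.0, 0.7, 0.3],   # t
    [0.0, 0.0, 0.0, 0.3, 0.7],   # d
])
lexicon = ["bat", "pat", "bad", "pad"]

def perceive(word, noise=0.25):
    """Graded phoneme evidence for each position: identification scores plus noise."""
    ev = np.stack([ident[idx[ph]] for ph in word])
    return np.clip(ev + rng.normal(0.0, noise, ev.shape), 1e-6, None)

def early_decision(evidence):
    """Commit to one phoneme per position first, then find the closest lexicon entry."""
    perceived = "".join(phones[j] for j in evidence.argmax(axis=1))
    mismatches = [sum(a != b for a, b in zip(perceived, w)) for w in lexicon]
    return lexicon[int(np.argmin(mismatches))]

def delayed_decision(evidence):
    """Keep graded evidence; decide only at the lexical level."""
    scores = [np.sum(np.log([evidence[i, idx[ph]] for i, ph in enumerate(w)])) for w in lexicon]
    return lexicon[int(np.argmax(scores))]

trials = [perceive("bat") for _ in range(2000)]
print("early  :", np.mean([early_decision(ev) == "bat" for ev in trials]))
print("delayed:", np.mean([delayed_decision(ev) == "bat" for ev in trials]))
```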
McDermott, Kathleen B; Gilmore, Adrian W; Nelson, Steven M; Watson, Jason M; Ojemann, Jeffrey G
2017-02-01
Neuroimaging investigations of human memory encoding and retrieval have revealed that multiple regions of parietal cortex contribute to memory. Recently, a sparse network of regions within parietal cortex has been identified using resting state functional connectivity MRI techniques. The regions within this network exhibit consistent task-related responses during memory formation and retrieval, leading to its being called the parietal memory network (PMN). Among its signature patterns are deactivation during initial experience with an item (e.g., encoding), activation during subsequent repetitions (e.g., at retrieval), and greater activation for successfully retrieved familiar words than novel words (e.g., hits relative to correctly-rejected lures). The question of interest here is whether novel words that are subjectively experienced as having been recently studied would elicit PMN activation similar to that of hits. That is, we compared old items correctly recognized to two types of novel items on a recognition test: those correctly identified as new and those incorrectly labeled as old due to their strong associative relation to the studied words (in the DRM false memory protocol). Subjective oldness plays a strong role in driving activation, as hits and false alarms activated similarly (and more strongly than correctly-rejected lures). Copyright © 2016 Elsevier Ltd. All rights reserved.
Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur
The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
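Response entropy of the kind described above can be computed directly from sentence-completion norms. A minimal sketch, assuming the norms are available as counts of each completion produced for a sentence frame; the example counts are hypothetical.

```python
import math

def response_entropy(completion_counts):
    """Given counts of each completion produced for a sentence frame in cloze norms,
    return (number of different responses, Shannon entropy in bits)."""
    total = sum(completion_counts.values())
    probs = [c / total for c in completion_counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    return len(completion_counts), entropy

# Usage: a highly constraining frame vs. a weakly constraining one (hypothetical counts).
print(response_entropy({"bone": 45, "treat": 3, "ball": 2}))                          # low entropy
print(response_entropy({"book": 12, "letter": 10, "note": 9, "story": 9, "poem": 10}))  # high entropy
```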
Word Recognition and Critical Reading.
ERIC Educational Resources Information Center
Groff, Patrick
1991-01-01
This article discusses the distinctions between literal and critical reading and explains the role that word recognition ability plays in critical reading behavior. It concludes that correct word recognition provides the raw material on which higher order critical reading is based. (DB)
Do handwritten words magnify lexical effects in visual word recognition?
Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel
2016-01-01
An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.
Word recognition using a lexicon constrained by first/last character decisions
NASA Astrophysics Data System (ADS)
Zhao, Sheila X.; Srihari, Sargur N.
1995-03-01
In lexicon-based recognition of machine-printed word images, the lexicon can be quite extensive, and recognition performance is closely related to its size: performance drops quickly as the lexicon grows. Here, we present an algorithm that improves word recognition performance by reducing the size of the given lexicon. The algorithm uses the information provided by the first and last characters of a word. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The candidates suggested by the classifier are used to select a sub-lexicon, and a word-shape analysis algorithm is then applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images that includes a wide range of print types and qualities.
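The lexicon-reduction step can be sketched as a simple filter: keep only entries whose first and last characters appear among the classifier's top candidates for the segmented first and last character images. The candidate lists and the toy lexicon below are illustrative assumptions.

```python
def reduce_lexicon(lexicon, first_candidates, last_candidates):
    """Keep lexicon words whose first character is among the classifier's candidates
    for the first character image and whose last character is among the candidates
    for the last character image."""
    first = set(first_candidates)
    last = set(last_candidates)
    return [w for w in lexicon if w and w[0] in first and w[-1] in last]

# Usage: the classifier returns ranked candidates for the first and last characters.
lexicon = ["recognition", "recreation", "research", "syllable", "rendition", "remark"]
sub_lexicon = reduce_lexicon(lexicon, first_candidates=["r", "n"], last_candidates=["n", "m"])
print(sub_lexicon)  # candidates passed on to word-shape analysis for final ranking
```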
Word Spotting and Recognition with Embedded Attributes.
Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest
2014-12-01
This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
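Casting spotting and recognition as nearest-neighbour search in a common subspace can be illustrated with a crude stand-in for the learned attribute embedding: a fixed-length character-presence vector for strings, compared against (assumed precomputed) image embeddings by cosine similarity. Everything below is a simplified sketch, not the paper's learned representation.

```python
import numpy as np

ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def string_embedding(word):
    """A crude fixed-length attribute vector for a text string: which characters occur
    in the first and second halves of the word (a simplified stand-in for learned
    attribute embeddings)."""
    vec = np.zeros(2 * len(ALPHABET))
    half = max(1, len(word) // 2)
    for i, ch in enumerate(word.lower()):
        if ch in ALPHABET:
            offset = 0 if i < half else len(ALPHABET)
            vec[offset + ALPHABET.index(ch)] = 1.0
    return vec

def spot(query_word, image_embeddings):
    """Rank word images by cosine similarity between their (assumed precomputed)
    embeddings and the embedded query string: retrieval as nearest neighbour."""
    q = string_embedding(query_word)
    q = q / (np.linalg.norm(q) + 1e-12)
    sims = {}
    for image_id, emb in image_embeddings.items():
        e = emb / (np.linalg.norm(emb) + 1e-12)
        sims[image_id] = float(e @ q)
    return sorted(sims.items(), key=lambda kv: kv[1], reverse=True)

# Usage: pretend the image embeddings already live in the common subspace; here we
# fake them by embedding their transcriptions.
images = {"img1": string_embedding("recognition"), "img2": string_embedding("retrieval")}
print(spot("recognition", images))
```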
ERIC Educational Resources Information Center
Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.
2014-01-01
Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
Etymology as an Aid to Understanding Chemistry Concepts
NASA Astrophysics Data System (ADS)
Sarma, Nittala S.
2004-10-01
Recognition of word roots and the pattern of evolution of scientific terms can be helpful in understanding chemistry concepts (gaining knowledge of new concepts represented by related terms). The meaning and significance of various etymological roots, occurring as prefixes and suffixes in technical terms particularly of organic chemistry, are explained in a unified manner in order to show the connection of various concepts vis-à-vis the terms in currency. The meanings of some special words and many examples are provided. The interesting aspects of history and culture often involved in the evolution of terms will help sustain an abiding engagement in the study of chemistry.
Evans, Julia L; Gillam, Ronald B; Montgomery, James W
2018-05-10
This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition, including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition at both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLD.
A pilot study to assess oral health literacy by comparing a word recognition and comprehension tool.
Khan, Khadija; Ruby, Brendan; Goldblatt, Ruth S; Schensul, Jean J; Reisine, Susan
2014-11-18
Oral health literacy is important to oral health outcomes. Very little has been established on comparing word recognition to comprehension in oral health literacy especially in older adults. Our goal was to compare methods to measure oral health literacy in older adults by using the Rapid Estimate of Literacy in Dentistry (REALD-30) tool including word recognition and comprehension and by assessing comprehension of a brochure about dry mouth. 75 males and 75 females were recruited from the University of Connecticut Dental practice. Participants were English speakers and at least 50 years of age. They were asked to read the REALD-30 words out loud (word recognition) and then define them (comprehension). Each correctly-pronounced and defined word was scored 1 for total REALD-30 word recognition and REALD-30 comprehension scores of 0-30. Participants then read the National Institute of Dental and Craniofacial Research brochure "Dry Mouth" and answered three questions defining dry mouth, causes and treatment. Participants also completed a survey on dental behavior. Participants scored higher on REALD-30 word recognition with a mean of 22.98 (SD = 5.1) compared to REALD-30 comprehension with a mean of 16.1 (SD = 4.3). The mean score on the brochure comprehension was 5.1 of a possible total of 7 (SD = 1.6). Pearson correlations demonstrated significant associations among the three measures. Multivariate regression showed that females and those with higher education had significantly higher scores on REALD-30 word-recognition, and dry mouth brochure questions. Being white was significantly related to higher REALD-30 recognition and comprehension scores but not to the scores on the brochure. This pilot study demonstrates the feasibility of using the REALD-30 and a brochure to assess literacy in a University setting among older adults. Participants had higher scores on the word recognition than on comprehension agreeing with other studies that recognition does not imply understanding.
Temporal lobe networks supporting the comprehension of spoken words.
Bonilha, Leonardo; Hillis, Argye E; Hickok, Gregory; den Ouden, Dirk B; Rorden, Chris; Fridriksson, Julius
2017-09-01
Auditory word comprehension is a cognitive process that involves the transformation of auditory signals into abstract concepts. Traditional lesion-based studies of stroke survivors with aphasia have suggested that neocortical regions adjacent to auditory cortex are primarily responsible for word comprehension. However, recent primary progressive aphasia and normal neurophysiological studies have challenged this concept, suggesting that the left temporal pole is crucial for word comprehension. Due to its vasculature, the temporal pole is not commonly completely lesioned in stroke survivors and this heterogeneity may have prevented its identification in lesion-based studies of auditory comprehension. We aimed to resolve this controversy using a combined voxel-based and structural connectome lesion-symptom mapping approach, since cortical dysfunction after stroke can arise from cortical damage or from white matter disconnection. Magnetic resonance imaging (T1-weighted and diffusion tensor imaging-based structural connectome), auditory word comprehension and object recognition tests were obtained from 67 chronic left hemisphere stroke survivors. We observed that damage to the inferior temporal gyrus, to the fusiform gyrus and to a white matter network including the left posterior temporal region and its connections to the middle temporal gyrus, inferior temporal gyrus, and cingulate cortex, was associated with word comprehension difficulties after factoring out object recognition. These results suggest that the posterior lateral and inferior temporal regions are crucial for word comprehension, serving as a hub to integrate auditory and conceptual processing. Early processing linking auditory words to concepts is situated in posterior lateral temporal regions, whereas additional and deeper levels of semantic processing likely require more anterior temporal regions. © The Author (2017). Published by Oxford University Press on behalf of the Guarantors of Brain. All rights reserved.
The cingulo-opercular network provides word-recognition benefit.
Vaden, Kenneth I; Kuchinsky, Stefanie E; Cute, Stephanie L; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A
2013-11-27
Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.
Storage and retrieval properties of dual codes for pictures and words in recognition memory.
Snodgrass, J G; McClure, P
1975-09-01
Storage and retrieval properties of pictures and words were studied within a recognition memory paradigm. Storage was manipulated by instructing subjects either to image or to verbalize to both picture and word stimuli during the study sequence. Retrieval was manipulated by representing a proportion of the old picture and word items in their opposite form during the recognition test (i.e., some old pictures were tested with their corresponding words and vice versa). Recognition performance for pictures was identical under the two instructional conditions, whereas recognition performance for words was markedly superior under the imagery instruction condition. It was suggested that subjects may engage in dual coding of simple pictures naturally, regardless of instructions, whereas dual coding of words may occur only under imagery instructions. The form of the test item had no effect on recognition performance for either type of stimulus and under either instructional condition. However, change of form of the test item markedly reduced item-by-item correlations between the two instructional conditions. It is tentatively proposed that retrieval is required in recognition, but that the effect of a form change is simply to make the retrieval process less consistent, not less efficient.
[Representation of letter position in visual word recognition process].
Makioka, S
1994-08-01
Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters that formed the targets (critical letters) were always contained in the probes (e.g., target: [symbol: see text] probe: [symbol: see text]). A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.
Neuron array with plastic synapses and programmable dendrites.
Ramakrishnan, Shubha; Wunderlich, Richard; Hasler, Jennifer; George, Suma
2013-10-01
We describe a novel neuromorphic chip architecture that models neurons for efficient computation. Traditional architectures of neuron array chips consist of large scale systems that are interfaced with AER for implementing intra- or inter-chip connectivity. We present a chip that uses AER for inter-chip communication but uses fast, reconfigurable FPGA-style routing with local memory for intra-chip connectivity. We model neurons with biologically realistic channel models, synapses and dendrites. This chip is suitable for small-scale network simulations and can also be used for sequence detection, utilizing directional selectivity properties of dendrites, ultimately for use in word recognition.
Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.
Shillcock, R; Ellison, T M; Monaghan, P
2000-10-01
Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.
Phonological Priming and Cohort Effects in Toddlers
ERIC Educational Resources Information Center
Mani, Nivedita; Plunkett, Kim
2011-01-01
Adult word recognition is influenced by prior exposure to phonologically or semantically related words ("cup" primes "cat" or "plate") compared to unrelated words ("door"), suggesting that words are organised in the adult lexicon based on their phonological and semantic properties and that word recognition implicates not just the heard word, but…
ERIC Educational Resources Information Center
Kearns, Devin M.; Steacy, Laura M.; Compton, Donald L.; Gilbert, Jennifer K.; Goodwin, Amanda P.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.
2016-01-01
Comprehensive models of derived polymorphemic word recognition skill in developing readers, with an emphasis on children with reading difficulty (RD), have not been developed. The purpose of the present study was to model individual differences in polymorphemic word recognition ability at the item level among 5th-grade children (N = 173)…
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-11-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Walczak, Adam; Ahlstrom, Jayne; Denslow, Stewart; Horwitz, Amy; Dubno, Judy R.
2008-01-01
Speech recognition can be difficult and effortful for older adults, even for those with normal hearing. Declining frontal lobe cognitive control has been hypothesized to cause age-related speech recognition problems. This study examined age-related changes in frontal lobe function for 15 clinically normal hearing adults (21–75 years) when they performed a word recognition task that was made challenging by decreasing word intelligibility. Although there were no age-related changes in word recognition, there were age-related changes in the degree of activity within left middle frontal gyrus (MFG) and anterior cingulate (ACC) regions during word recognition. Older adults engaged left MFG and ACC regions when words were most intelligible compared to younger adults who engaged these regions when words were least intelligible. Declining gray matter volume within temporal lobe regions responsive to word intelligibility significantly predicted left MFG activity, even after controlling for total gray matter volume, suggesting that declining structural integrity of brain regions responsive to speech leads to the recruitment of frontal regions when words are easily understood. PMID:18274825
Syllable Transposition Effects in Korean Word Recognition
ERIC Educational Resources Information Center
Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen
2015-01-01
Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…
Longitudinal changes in speech recognition in older persons.
Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H
2008-01-01
Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.
Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech
ERIC Educational Resources Information Center
Yip, Michael C.
2016-01-01
Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…
Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study
ERIC Educational Resources Information Center
Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua
2012-01-01
Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…
NASA Astrophysics Data System (ADS)
Li, Ji; Ren, Fuji
Weblogs have greatly changed the way people communicate. Affective analysis of blog posts is valuable for many applications such as text-to-speech synthesis or computer-assisted recommendation. Traditional emotion recognition in text based on single-label classification cannot satisfy the higher requirements of affective computing. In this paper, the automatic identification of sentence emotion in weblogs is modeled as a multi-label text categorization task. Experiments are carried out on 12,273 blog sentences from the Chinese emotion corpus Ren_CECps with 8-dimension emotion annotation. An ensemble algorithm, RAKEL, is used to recognize dominant emotions from the writer's perspective. Our emotion feature, which uses a detailed intensity representation for word emotions, outperforms the other main features such as the word-frequency feature and the traditional lexicon-based feature. In order to deal with relatively complex sentences, we integrate grammatical characteristics (punctuation, disjunctive connectives, modification relations, and negation) into the feature set. This achieves increases of 13.51% in micro-averaged F1 and 12.49% in macro-averaged F1 compared to the traditional lexicon-based feature. The results show that a multi-dimension emotion representation with grammatical features can efficiently classify sentence emotion in a multi-label setting.
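RAKEL (random k-labelsets) is Tsoumakas and Vlahavas' ensemble method and implementations exist in libraries such as scikit-multilearn; the sketch below hand-rolls a tiny, simplified variant on top of scikit-learn just to show the core idea (train label-powerset models on random label subsets, then vote). The class name TinyRakel, the 0.5 voting threshold, and the toy data are illustrative assumptions, not the paper's setup:

    import numpy as np
    from sklearn.base import clone
    from sklearn.linear_model import LogisticRegression

    class TinyRakel:
        """Simplified random-k-labelsets ensemble for multi-label classification."""

        def __init__(self, k=3, n_models=10, base=None, seed=0):
            self.k, self.n_models, self.seed = k, n_models, seed
            self.base = base if base is not None else LogisticRegression(max_iter=1000)

        def fit(self, X, Y):
            rng = np.random.default_rng(self.seed)
            self.n_labels_ = Y.shape[1]
            self.members_, self.labelsets_ = [], []
            for _ in range(self.n_models):
                ls = rng.choice(self.n_labels_, size=self.k, replace=False)
                # Label-powerset trick: encode each combination of the k labels as one class.
                y_lp = np.array(["".join(str(v) for v in row) for row in Y[:, ls]])
                self.members_.append(clone(self.base).fit(X, y_lp))
                self.labelsets_.append(ls)
            return self

        def predict(self, X):
            votes = np.zeros((X.shape[0], self.n_labels_))
            counts = np.zeros(self.n_labels_)
            for clf, ls in zip(self.members_, self.labelsets_):
                bits = np.array([[int(c) for c in p] for p in clf.predict(X)])
                votes[:, ls] += bits
                counts[ls] += 1
            return (votes / np.maximum(counts, 1) >= 0.5).astype(int)

    # Toy usage: 300 "sentences" x 20 features, 8 binary emotion labels
    rng = np.random.default_rng(1)
    X = rng.random((300, 20))
    Y = (rng.random((300, 8)) < 0.3).astype(int)
    print(TinyRakel(k=3, n_models=12, seed=2).fit(X, Y).predict(X).shape)   # (300, 8)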
Event Recognition Based on Deep Learning in Chinese Texts
Zhang, Yajun; Liu, Zongtian; Zhou, Wen
2016-01-01
Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231
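As a rough stand-in for the DBN-plus-backpropagation pipeline described above, the sketch below stacks scikit-learn BernoulliRBM layers (greedy unsupervised feature learning) under an MLPClassifier (the back-propagation classifier). The feature encoding, layer sizes, and toy data are assumptions for illustration; this is not the CEERM implementation and it omits the paper's supervised fine-tuning of the RBM layers:

    import numpy as np
    from sklearn.pipeline import Pipeline
    from sklearn.neural_network import BernoulliRBM, MLPClassifier
    from sklearn.preprocessing import MinMaxScaler

    # Pretend each word is described by the six feature layers named in the abstract
    # (POS, dependency relation, length, location, distance to core word, trigger-word
    # frequency), encoded and scaled into a fixed-length vector. Toy data only.
    rng = np.random.default_rng(0)
    X = rng.random((500, 24))          # 500 words x 24 encoded features
    y = rng.integers(0, 2, 500)        # 1 = trigger word, 0 = other

    model = Pipeline([
        ("scale", MinMaxScaler()),     # RBMs expect inputs in [0, 1]
        ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)),
    ])
    model.fit(X, y)
    print("toy training accuracy:", model.score(X, y))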
Effects of Error Correction on Word Recognition and Reading Comprehension.
ERIC Educational Resources Information Center
Jenkins, Joseph R.; And Others
1983-01-01
Two procedures for correcting oral reading errors, word supply and word drill, were examined to determine their effects on measures of word recognition and comprehension with 17 learning disabled elementary school students. (Author/SW)
Automatic speech recognition technology development at ITT Defense Communications Division
NASA Technical Reports Server (NTRS)
White, George M.
1977-01-01
An assessment of the applications of automatic speech recognition to defense communication systems is presented. Future research efforts include investigations into the following areas: (1) dynamic programming; (2) recognition of speech degraded by noise; (3) speaker independent recognition; (4) large vocabulary recognition; (5) word spotting and continuous speech recognition; and (6) isolated word recognition.
Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation
Kunert, Richard; Scheepers, Christoph
2014-01-01
Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. PMID:25346708
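The "continuous non-linear function" approach described above can be illustrated with a generic growth-curve fit: accuracy over time is fitted with a parametric curve, and speed-related measures (e.g., the curve's midpoint) are read off the fitted parameters. The three-parameter logistic form and the simulated data below are assumptions, not the authors' exact model:

    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(t, asymptote, midpoint, slope):
        # Probability of looking at the correct word as a function of time (ms).
        return asymptote / (1.0 + np.exp(-slope * (t - midpoint)))

    t = np.arange(0, 2000, 50)                       # time since stimulus onset (ms)
    rng = np.random.default_rng(0)
    observed = logistic(t, 0.9, 800, 0.01) + rng.normal(0, 0.02, t.size)   # simulated accuracy

    params, _ = curve_fit(logistic, t, observed, p0=[0.8, 700, 0.005])
    asymptote, midpoint, slope = params
    print(f"asymptote={asymptote:.2f}, midpoint={midpoint:.0f} ms, slope={slope:.4f}")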
ERIC Educational Resources Information Center
Beech, John R.; Mayall, Kate A.
2005-01-01
This study investigates the relative roles of internal and external letter features in word recognition. In Experiment 1 the efficacy of outer word fragments (words with all their horizontal internal features removed) was compared with inner word fragments (words with their outer features removed) as primes in a forward masking paradigm. These…
The Development of Word Recognition in a Second Language.
ERIC Educational Resources Information Center
Muljani, D.; Koda, Keiko; Moates, Danny R.
1998-01-01
A study investigated differences in English word recognition between native speakers of Indonesian (an alphabetic language) and Chinese (a logographic language) learning English as a Second Language. Results largely confirmed the hypothesis that an alphabetic first language would predict better word recognition in speakers of an alphabetic language,…
The Role of Antibody in Korean Word Recognition
ERIC Educational Resources Information Center
Lee, Chang Hwan; Lee, Yoonhyoung; Kim, Kyungil
2010-01-01
A subsyllabic phonological unit, the antibody, has received little attention as a potential fundamental processing unit in word recognition. The psychological reality of the antibody in Korean recognition was investigated by looking at the performance of subjects presented with nonwords and words in the lexical decision task. In Experiment 1, the…
The Effects of Explicit Word Recognition Training on Japanese EFL Learners
ERIC Educational Resources Information Center
Burrows, Lance; Holsworth, Michael
2016-01-01
This study is a quantitative, quasi-experimental investigation focusing on the effects of word recognition training on word recognition fluency, reading speed, and reading comprehension for 151 Japanese university students at a lower-intermediate reading proficiency level. Four treatment groups were given training in orthographic, phonological,…
Interpreting Chicken-Scratch: Lexical Access for Handwritten Words
Barnhart, Anthony S.; Goldinger, Stephen D.
2014-01-01
Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708
Barker, Lynne Ann; Morton, Nicholas; Romanowski, Charles A J; Gosden, Kevin
2013-10-24
We report a rare case of a patient unable to read (alexic) and write (agraphic) after a mild head injury. He had preserved speech and comprehension, could spell aloud, identify words spelt aloud and copy letter features. He was unable to visualise letters but showed no problems with digits. Neuropsychological testing revealed general visual memory, processing speed and imaging deficits. Imaging data revealed an 8 mm colloid cyst of the third ventricle that splayed the fornix. Little is known about functions mediated by fornical connectivity, but this region is thought to contribute to memory recall. Other regions thought to mediate letter recognition and letter imagery, visual word form area and visual pathways were intact. We remediated reading and writing by multimodal letter retraining. The study raises issues about the neural substrates of reading, role of fornical tracts to selective memory in the absence of other pathology, and effective remediation strategies for selective functional deficits.
Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia
2016-01-01
This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword—transposable nonword—was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in emerged regions to further assess their functional roles in word processing. Results showed significant conjunctional effect and positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed. PMID:26901644
Recognition and reading aloud of kana and kanji word: an fMRI study.
Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao
2009-03-16
It has been proposed that different brain regions are recruited for processing two Japanese writing systems, namely, kanji (morphograms) and kana (syllabograms). However, this difference may depend upon what type of word was used and also on what type of task was performed. Using fMRI, we investigated brain activation for processing kanji and kana words with similar high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side, and the subjects were required to press a button corresponding to the real word in the word recognition task and were required to read aloud the real word in the reading aloud task. Brain activations were similar between kanji and kana during reading aloud task, whereas during word recognition task in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which were related mainly to visual word-form analysis and visuospatial attention. Concerning the difference of brain activity between two tasks, differential activation was found only in the regions associated with task-specific sensorimotor processing for kana, whereas visuospatial attention network also showed greater activation during word recognition task than during reading aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between the script characteristics and the task demands.
Intact suppression of increased false recognition in schizophrenia.
Weiss, Anthony P; Dodson, Chad S; Goff, Donald C; Schacter, Daniel L; Heckers, Stephan
2002-09-01
Recognition memory is impaired in patients with schizophrenia, as they rely largely on item familiarity, rather than conscious recollection, to make mnemonic decisions. False recognition of novel items (foils) is increased in schizophrenia and may relate to this deficit in conscious recollection. By studying pictures of the target word during encoding, healthy adults can suppress false recognition. This study examined the effect of pictorial encoding on subsequent recognition of repeated foils in patients with schizophrenia. The study included 40 patients with schizophrenia and 32 healthy comparison subjects. After incidental encoding of 60 words or pictures, subjects were tested for recognition of target items intermixed with 60 new foils. These new foils were subsequently repeated following either a two- or 24-word delay. Subjects were instructed to label these repeated foils as new and not to mistake them for old target words. Schizophrenic patients showed greater overall false recognition of repeated foils. The rate of false recognition of repeated foils was lower after picture encoding than after word encoding. Despite higher levels of false recognition of repeated new items, patients and comparison subjects demonstrated a similar degree of false recognition suppression after picture, as compared to word, encoding. Patients with schizophrenia displayed greater false recognition of repeated foils than comparison subjects, suggesting both a decrement of item- (or source-) specific recollection and a consequent reliance on familiarity in schizophrenia. Despite these deficits, presenting pictorial information at encoding allowed schizophrenic subjects to suppress false recognition to a similar degree as the comparison group, implying the intact use of a high-level cognitive strategy in this population.
ERIC Educational Resources Information Center
Gelfand, Stanley A.; Gelfand, Jessica T.
2012-01-01
Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12…
Shi, Lu-Feng; Koenig, Laura L
2016-09-01
Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented with 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
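The j statistic referred to above is the ratio of log word-recognition probability to log phoneme-recognition probability (Boothroyd & Nittrouer, 1988); for CVC words it approaches 3 when the three phonemes are recognized independently and drops as lexical knowledge helps. A minimal sketch with made-up example scores:

    import math

    def j_factor(p_word, p_phoneme):
        # Both proportions must lie strictly between 0 and 1 for the logs to be defined.
        return math.log(p_word) / math.log(p_phoneme)

    # e.g., a listener recognizing 55% of whole words and 80% of phonemes
    print(round(j_factor(0.55, 0.80), 2))   # ~2.68, i.e., some lexical benefit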
Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?
Haro, Juan; Ferré, Pilar
2018-06-01
It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous words processing and representation.
Continuous multiword recognition performance of young and elderly listeners in ambient noise
NASA Astrophysics Data System (ADS)
Sato, Hiroshi
2005-09-01
Hearing threshold shift due to aging is known to be a dominant factor degrading speech recognition performance in noisy conditions. On the other hand, how age-related cognitive factors affect speech recognition performance across speech-to-noise conditions is not well established. In this study, two kinds of speech test were performed to examine how working memory load relates to speech recognition performance. One was a word recognition test with high-familiarity, four-syllable Japanese words (single-word test), in which each word was presented and listeners were asked to write it down on paper with ample time to answer. In the other test, five words were presented in succession and listeners were asked to write them down only after all five had been presented (multiword test). Both tests were conducted at various speech-to-noise ratios under 50-dBA Hoth-spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to scores on both tests; (2) scores on the single-word test are well correlated with those on the multiword test; and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in conditions where single-word scores have reached their ceiling.
ERIC Educational Resources Information Center
Loukusa, Soile; Mäkinen, Leena; Kuusikko-Gauffin, Sanna; Ebeling, Hanna; Moilanen, Irma
2014-01-01
Background: Social perception skills, such as understanding the mind and emotions of others, affect children's communication abilities in real-life situations. In addition to autism spectrum disorder (ASD), there is increasing knowledge that children with specific language impairment (SLI) also demonstrate difficulties in their social…
Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.
Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo
2011-06-01
The purposes of the present study were to establish the Standard-Chinese version of the Lexical Neighborhood Test (LNT) and to examine lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from a database of daily speech materials for normal-hearing (NH) children aged 3-5 years. The lists were further divided into "easy" and "hard" halves according to word frequency and neighborhood density in the database, following the Neighborhood Activation Model (NAM). Ninety-six NH children (ages 4.0-7.0 years) were divided into three age groups at 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. Inter-list performance was found to be equivalent, and inter-rater reliability was high, with 92.5-95% consistency. Word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words. Word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency made increasingly large contributions to Chinese word recognition, in that order. The results indicated that Chinese word recognition was influenced by word frequency, age, and neighborhood density, with word frequency playing the major role. These results were consistent with those in other languages, supporting the application of the NAM to the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database for children aged 4-6 years provide a reliable means of testing spoken-word recognition in children with hearing impairment.
Concurrent Correlates of Chinese Word Recognition in Deaf and Hard-of-Hearing Children
ERIC Educational Resources Information Center
Ching, Boby Ho-Hong; Nunes, Terezinha
2015-01-01
The aim of this study was to explore the relative contributions of phonological, semantic radical, and morphological awareness to Chinese word recognition in deaf and hard-of-hearing (DHH) children. Measures of word recognition, general intelligence, phonological, semantic radical, and morphological awareness were administered to 32 DHH and 35…
ERIC Educational Resources Information Center
Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.
2017-01-01
Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…
Formal Models of Word Recognition. Final Report.
ERIC Educational Resources Information Center
Travers, Jeffrey R.
Existing mathematical models of word recognition are reviewed and a new theory is proposed in this research. The new theory integrates earlier proposals within a single framework, sacrificing none of the predictive power of the earlier proposals, but offering a gain in theoretical economy. The theory holds that word recognition is accomplished by…
ERIC Educational Resources Information Center
Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor
2017-01-01
The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…
Specifying Theories of Developmental Dyslexia: A Diffusion Model Analysis of Word Recognition
ERIC Educational Resources Information Center
Zeguers, Maaike H. T.; Snellings, Patrick; Tijms, Jurgen; Weeda, Wouter D.; Tamboer, Peter; Bexkens, Anika; Huizenga, Hilde M.
2011-01-01
The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and auditory lexical decision data. The first study showed…
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2012-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…
Learning during processing: Word learning doesn't wait for word recognition to finish
Apfelbaum, Keith S.; McMurray, Bob
2017-01-01
Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082
Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project
Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger
2011-01-01
Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimuli characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading-off in their utilization of lexical and nonlexical information. PMID:21728459
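The "RT distributional characteristics" mentioned above are commonly summarized with ex-Gaussian parameters (mu, sigma, tau). The sketch below fits an ex-Gaussian to simulated lexical-decision RTs using SciPy's exponnorm distribution (parameterized as K = tau/sigma); it is a generic illustration under those assumptions, not the English Lexicon Project analysis pipeline:

    import numpy as np
    from scipy.stats import exponnorm

    rng = np.random.default_rng(0)
    # Simulated lexical-decision RTs: Gaussian component plus exponential tail (ms)
    rts = rng.normal(500, 50, 2000) + rng.exponential(150, 2000)

    K, loc, scale = exponnorm.fit(rts)
    mu, sigma, tau = loc, scale, K * scale
    print(f"mu={mu:.0f} ms, sigma={sigma:.0f} ms, tau={tau:.0f} ms")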
Dien, Joseph; Brian, Eric S; Molfese, Dennis L; Gold, Brian T
2013-10-01
Two brain regions with established roles in reading are the posterior middle temporal gyrus and the posterior fusiform gyrus (FG). Lesion studies have also suggested that the region located between them, the posterior inferior temporal gyrus (pITG), plays a central role in word recognition. However, these lesion results could reflect disconnection effects since neuroimaging studies have not reported consistent lexicality effects in pITG. Here we tested whether these reported pITG lesion effects are due to disconnection effects or not using parallel Event-related Potentials (ERP)/functional magnetic resonance imaging (fMRI) studies. We predicted that the Recognition Potential (RP), a left-lateralized ERP negativity that peaks at about 200-250 msec, might be the electrophysiological correlate of pITG activity and that conditions that evoke the RP (perceptual degradation) might therefore also evoke pITG activity. In Experiment 1, twenty-three participants performed a lexical decision task (temporally flanked by supraliminal masks) while having high-density 129-channel ERP data collected. In Experiment 2, a separate group of fifteen participants underwent the same task while having fMRI data collected in a 3T scanner. Examination of the ERP data suggested that a canonical RP effect was produced. The strongest corresponding effect in the fMRI data was in the vicinity of the pITG. In addition, results indicated stimulus-dependent functional connectivity between pITG and a region of the posterior FG near the Visual Word Form Area (VWFA) during word compared to nonword processing. These results provide convergent spatiotemporal evidence that the pITG contributes to early lexical access through interaction with the VWFA.
Andoh, Jamila; Paus, Tomás
2011-02-01
Repetitive TMS (rTMS) provides a noninvasive tool for modulating neural activity in the human brain. In healthy participants, rTMS applied over the language-related areas in the left hemisphere, including the left posterior temporal area of Wernicke (LTMP) and inferior frontal area of Broca, have been shown to affect performance on word recognition tasks. To investigate the neural substrate of these behavioral effects, off-line rTMS was combined with fMRI acquired during the performance of a word recognition task. Twenty right-handed healthy men underwent fMRI scans before and after a session of 10-Hz rTMS applied outside the magnetic resonance scanner. Functional magnetic resonance images were acquired during the performance of a word recognition task that used English or foreign-language words. rTMS was applied over the LTMP in one group of 10 participants (LTMP group), whereas the homologue region in the right hemisphere was stimulated in another group of 10 participants (RTMP group). Changes in task-related fMRI response (English minus foreign languages) and task performances (response time and accuracy) were measured in both groups and compared between pre-rTMS and post-rTMS. Our results showed that rTMS increased task-related fMRI response in the homologue areas contralateral to the stimulated sites. We also found an effect of rTMS on response time for the LTMP group only. These findings provide insights into changes in neural activity in cortical regions connected to the stimulated site and are consistent with a hypothesis raised in a previous review about the role of the homologue areas in the contralateral hemisphere for preserving behavior after neural interference.
Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?
ERIC Educational Resources Information Center
Haro, Juan; Ferré, Pilar
2018-01-01
It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these…
Selective attention and recognition: effects of congruency on episodic learning.
Rosner, Tamara M; D'Angelo, Maria C; MacLellan, Ellen; Milliken, Bruce
2015-05-01
Recent research on cognitive control has focused on the learning consequences of high selective attention demands in selective attention tasks (e.g., Botvinick, Cognit Affect Behav Neurosci 7(4):356-366, 2007; Verguts and Notebaert, Psychol Rev 115(2):518-525, 2008). The current study extends these ideas by examining the influence of selective attention demands on remembering. In Experiment 1, participants read aloud the red word in a pair of red and green spatially interleaved words. Half of the items were congruent (the interleaved words had the same identity), and the other half were incongruent (the interleaved words had different identities). Following the naming phase, participants completed a surprise recognition memory test. In this test phase, recognition memory was better for incongruent than for congruent items. In Experiment 2, context was only partially reinstated at test, and again recognition memory was better for incongruent than for congruent items. In Experiment 3, all of the items contained two different words, but in one condition the words were presented close together and interleaved, while in the other condition the two words were spatially separated. Recognition memory was better for the interleaved than for the separated items. This result rules out an interpretation of the congruency effects on recognition in Experiments 1 and 2 that hinges on stronger relational encoding for items that have two different words. Together, the results support the view that selective attention demands for incongruent items lead to encoding that improves recognition.
Comparison of crisp and fuzzy character networks in handwritten word recognition
NASA Technical Reports Server (NTRS)
Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien
1992-01-01
Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
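The "fuzzy desired outputs" mentioned above replace one-hot character labels with graded class memberships derived from a fuzzy k-nearest-neighbor rule. The sketch below computes distance-weighted soft memberships in that spirit; the weighting scheme, parameters, and toy data are assumptions for illustration and may differ from the paper's exact fuzzy k-NN formulation:

    import numpy as np

    def fuzzy_memberships(X, y, n_classes, k=5, m=2.0, eps=1e-9):
        """Return an (n_samples, n_classes) matrix of soft class memberships."""
        X = np.asarray(X, dtype=float)
        n = X.shape[0]
        U = np.zeros((n, n_classes))
        for i in range(n):
            d = np.linalg.norm(X - X[i], axis=1)
            d[i] = np.inf                                  # exclude the sample itself
            nn = np.argsort(d)[:k]
            w = 1.0 / (d[nn] ** (2.0 / (m - 1.0)) + eps)   # closer neighbours count more
            for j, wj in zip(nn, w):
                U[i, y[j]] += wj
            U[i] /= U[i].sum()
        return U

    # Toy usage: 2-D "character features" for 3 classes; rows of U serve as soft targets
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(c, 0.7, (30, 2)) for c in (0, 3, 6)])
    y = np.repeat([0, 1, 2], 30)
    U = fuzzy_memberships(X, y, n_classes=3)
    print(U[0])   # mostly class 0, with small memberships elsewhere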
Acquisition of Malay word recognition skills: lessons from low-progress early readers.
Lee, Lay Wah; Wheldall, Kevin
2011-02-01
Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both syllable awareness and phoneme blending were significant predictors of word recognition, suggesting that both syllable and phonemic grain-sizes are important in Malay word recognition. Item analysis revealed a hierarchical pattern of difficulty based on the syllable and the phonic structure of the words. Error analysis identified the sources of errors as inefficient syllable segmentation, oversimplification of syllables, insufficient grapheme-phoneme knowledge, and inefficient phonemic code assembly. Evidence also suggests that direct instruction in syllable segmentation, phonemic awareness and grapheme-phoneme correspondence is necessary for low-progress readers to acquire word recognition skills. Finally, a logical sequence to teach grapheme-phoneme decoding in Malay is suggested.
Context effects and false memory for alcohol words in adolescents.
Zack, Martin; Sharpley, Justin; Dent, Clyde W; Stacy, Alan W
2009-03-01
This study assessed incidental recognition of Alcohol and Neutral words in adolescents who encoded the words under distraction. Participants were 171 (87 male) 10th grade students, ages 14-16 (M=15.1) years. Testing was conducted by telephone: Participants listened to a list containing Alcohol and Neutral (Experimental--Group E, n=92) or only Neutral (Control--Group C, n=79) words, while counting backwards from 200 by two's. Recognition was tested immediately thereafter. Group C exhibited higher false recognition of Neutral than Alcohol items, whereas Group E displayed equivalent false rates for both word types. The reported number of alcohol TV ads seen in the past week predicted higher false recognition of Neutral words in Group C and of Alcohol words in Group E. False memory for Alcohol words in Group E was greater in males and high anxiety sensitive participants. These context-dependent biases may contribute to exaggerations in perceived drinking norms previously found to predict alcohol misuse in young drinkers.
Liu, Haihong; Liu, Sha; Wang, Suju; Liu, Chang; Kong, Ying; Zhang, Ning; Li, Shujing; Yang, Yilin; Han, Demin; Zhang, Luo
2013-01-01
The purpose of this study was to examine the open-set word recognition performance of Mandarin Chinese-speaking children who had received a multichannel cochlear implant (CI) and to examine the effects of lexical characteristics and demographic factors (i.e., age at implantation and duration of implant use) on Mandarin Chinese open-set word recognition in these children. Participants were 230 prelingually deafened children with CIs. Age at implantation ranged from 0.9 to 16.0 years, with a mean of 3.9 years. The Standard-Chinese version of the Monosyllabic Lexical Neighborhood Test and the Multisyllabic Lexical Neighborhood Test were used to evaluate the open-set word identification abilities of the children. A two-way analysis of variance was performed to delineate the lexical effects on open-set word identification, with word difficulty and syllable length as the two main factors. The effects of age at implantation and duration of implant use on open-set word-recognition performance were examined using correlational/regression models. First, the average percent-correct scores for the disyllabic "easy" list, disyllabic "hard" list, monosyllabic "easy" list, and monosyllabic "hard" list were 65.0%, 51.3%, 58.9%, and 46.2%, respectively. For both the easy and hard lists, the percentage of words correctly identified was higher for disyllabic words than for monosyllabic words. Second, the CI group scored 26.3, 31.3, and 18.8 percentage points lower than their hearing-age-matched normal-hearing peers at 4, 5, and 6 years of hearing age, respectively. The corresponding gaps between the CI group and the chronological-age-matched normal-hearing group were 47.6, 49.6, and 42.4 percentage points, respectively. The individual variations in performance were much greater in the CI group than in the normal-hearing group. Third, the children exhibited steady improvements in performance as the duration of implant use increased, especially 1 to 6 years postimplantation. Last, age at implantation had significant effects on postimplantation word-recognition performance. The benefit of early implantation was particularly evident in children 5 years old or younger. In conclusion, first, Mandarin Chinese-speaking pediatric CI users' open-set word recognition was influenced by the lexical characteristics of the stimuli: scores were higher for easy words than for hard words and higher for disyllabic words than for monosyllabic words. Second, Mandarin Chinese-speaking pediatric CI users exhibited steady progress in open-set word recognition as the duration of implant use increased. However, the present study also demonstrated that, even after 6 years of CI use, there was a significant deficit in open-set word-recognition performance in the CI children compared with their normal-hearing peers. Third, age at implantation had significant effects on open-set word-recognition performance. Early-implanted children exhibited better performance than children implanted later.
ERIC Educational Resources Information Center
Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco
2017-01-01
This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…
The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands
ERIC Educational Resources Information Center
Diana, Rachel A.; Reder, Lynne M.
2006-01-01
Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…
Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language
ERIC Educational Resources Information Center
Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter
2011-01-01
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…
ERIC Educational Resources Information Center
Sheehy, Kieron
2005-01-01
Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…
Examination of the neighborhood activation theory in normal and hearing-impaired listeners.
Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A
2001-02-01
Experiments were conducted to examine the effects of lexical information on word recognition among normal-hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were those incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high-frequency over a low-frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal-hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of each of the three lexical factors: word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this lexicon, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal-hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2), as well as to an elderly group of listeners with sensorineural hearing loss in speech-shaped noise (Experiment 3). The results of the three experiments verified the predictions of NAM in both normal-hearing and hearing-impaired listeners. In each experiment, words from low-density neighborhoods were recognized more accurately than those from high-density neighborhoods. The presence of high-frequency neighbors (high average neighborhood frequency) produced poorer recognition performance than comparable conditions with low-frequency neighbors. Word frequency was found to have a highly significant effect on word recognition: lexical conditions with high-frequency words produced higher performance scores than conditions with low-frequency words. The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher-level cognitive (lexical) factors on lower-level phonemic processing.
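For concreteness, the two neighborhood measures used above can be computed directly from a phonemically transcribed lexicon: a word's neighbors are the words one phoneme substitution, deletion, or addition away, its neighborhood density is the count of such neighbors, and its average neighborhood frequency is their mean frequency. A minimal sketch over a made-up five-word lexicon (toy transcriptions and frequencies, not the Webster's-based lexicon used in the study):

    def one_phoneme_apart(a, b):
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1            # substitution
        short, long_ = sorted((a, b), key=len)
        if len(long_) - len(short) != 1:
            return False
        for i in range(len(long_)):                                  # deletion/addition
            if long_[:i] + long_[i + 1:] == short:
                return True
        return False

    lexicon = {                      # phoneme tuples -> frequency per million (toy values)
        ("k", "ae", "t"): 50.0,      # cat
        ("b", "ae", "t"): 30.0,      # bat
        ("k", "ae", "p"): 12.0,      # cap
        ("k", "ae", "s", "t"): 8.0,  # cast
        ("d", "aa", "g"): 40.0,      # dog
    }

    for word, freq in lexicon.items():
        neighbors = [w for w in lexicon if w != word and one_phoneme_apart(word, w)]
        density = len(neighbors)
        mean_nf = (sum(lexicon[w] for w in neighbors) / density) if density else 0.0
        print(word, "freq:", freq, "density:", density, "mean neighbor freq:", round(mean_nf, 1))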
Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?
ERIC Educational Resources Information Center
McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc
2010-01-01
To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…
Lexico-Semantic Structure and the Word-Frequency Effect in Recognition Memory
ERIC Educational Resources Information Center
Monaco, Joseph D.; Abbott, L. F.; Kahana, Michael J.
2007-01-01
The word-frequency effect (WFE) in recognition memory refers to the finding that more rare words are better recognized than more common words. We demonstrate that a familiarity-discrimination model operating on data from a semantic word-association space yields a robust WFE in data on both hit rates and false-alarm rates. Our modeling results…
Phonological Activation in Multi-Syllabic Word Recognition
ERIC Educational Resources Information Center
Lee, Chang H.
2007-01-01
Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…
Individual differences in online spoken word recognition: Implications for SLI
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2012-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
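To make the "lexical decay" manipulation above concrete, the sketch below runs a generic interactive-activation-style competition among a target, a cohort competitor, and a rhyme competitor, and shows how changing a decay parameter alters how strongly competitors remain active. The update rule, parameter values, and three-word lexicon are illustrative assumptions only; this is not the TRACE model or the authors' simulations:

    import numpy as np

    def simulate(decay, steps=60, inhibition=0.08):
        # Bottom-up input favouring the target over a cohort and a rhyme competitor.
        bottom_up = np.array([0.12, 0.09, 0.05])       # target, cohort, rhyme
        act = np.zeros(3)
        history = []
        for _ in range(steps):
            lateral = inhibition * (act.sum() - act)   # competition from the other words
            act = np.clip(act * (1.0 - decay) + bottom_up - lateral, 0.0, 1.0)
            history.append(act.copy())
        return np.array(history)

    for decay in (0.05, 0.20):
        final = simulate(decay)[-1]
        print(f"decay={decay:.2f} -> target={final[0]:.2f}, "
              f"cohort={final[1]:.2f}, rhyme={final[2]:.2f}")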
Hagenbeek, R E; Rombouts, S A R B; Veltman, D J; Van Strien, J W; Witter, M P; Scheltens, P; Barkhof, F
2007-10-01
Changes in brain activation as a function of continuous multiparametric word recognition have not, to our knowledge, been studied before with functional MR imaging (fMRI). Our aim was to identify linear changes in brain activation and, more interestingly, nonlinear changes in brain activation as a function of extended word repetition. Fifteen healthy young right-handed individuals participated in this study. An event-related, extended continuous word-recognition task with 30 target words was used to study the parametric effect of word recognition on brain activation. Word-recognition-related brain activation was studied as a function of 9 word repetitions. fMRI data were analyzed with a general linear model with regressors for linearly changing signal intensity and nonlinearly changing signal intensity, according to group average reaction time (RT) and individual RTs. A network generally associated with episodic memory recognition showed either constant or linearly decreasing brain activation as a function of word repetition. Furthermore, both anterior and posterior cingulate cortices and the left middle frontal gyrus followed the nonlinear curve of the group RT, whereas the anterior cingulate cortex was also associated with individual RT. Linear alteration in brain activation as a function of word repetition explained most changes in blood oxygen level-dependent signal intensity. Using a hierarchically orthogonalized model, we found evidence for nonlinear activation associated with both group and individual RTs.
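The hierarchically orthogonalized model described above can be illustrated with a toy regression: a constant term, a linearly changing regressor, and a nonlinear regressor shaped like the group-average RT that is orthogonalized against the earlier columns before fitting. The sketch below works on a simplified 9-point repetition curve rather than convolved event-level fMRI regressors, and all numbers are made-up assumptions:

    import numpy as np

    repetitions = np.arange(1, 10)                          # 9 presentations per word
    group_rt = np.array([820, 760, 720, 700, 690, 685, 683, 690, 700], float)  # toy ms

    def orthogonalize(v, basis):
        # Remove from v everything already explained by the columns in `basis`.
        coef, *_ = np.linalg.lstsq(basis, v, rcond=None)
        return v - basis @ coef

    constant = np.ones_like(repetitions, dtype=float)
    linear = repetitions - repetitions.mean()
    rt_shape = orthogonalize(group_rt - group_rt.mean(),
                             np.column_stack([constant, linear]))

    X = np.column_stack([constant, linear, rt_shape])       # hierarchical design matrix

    # Toy "signal": mostly a linear decrease plus a component that follows the RT curve
    rng = np.random.default_rng(0)
    signal = 3.0 - 0.2 * repetitions + 0.004 * (group_rt - group_rt.mean()) + rng.normal(0, 0.05, 9)

    betas, *_ = np.linalg.lstsq(X, signal, rcond=None)
    print("constant, linear, RT-shaped betas:", np.round(betas, 3))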
Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.
Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T
2017-07-01
Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290.
Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words
ERIC Educational Resources Information Center
Baayen, R. H.; Feldman, L. B.; Schreuder, R.
2006-01-01
Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…
Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes
ERIC Educational Resources Information Center
Dich, Nadya
2014-01-01
A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…
Modelling the Effects of Semantic Ambiguity in Word Recognition
ERIC Educational Resources Information Center
Rodd, Jennifer M.; Gaskell, M. Gareth; Marslen-Wilson, William D.
2004-01-01
Most words in English are ambiguous between different interpretations; words can mean different things in different contexts. We investigate the implications of different types of semantic ambiguity for connectionist models of word recognition. We present a model in which there is competition to activate distributed semantic representations. The…
ERIC Educational Resources Information Center
Pisoni, David B.; And Others
The results of three projects concerned with auditory word recognition and the structure of the lexicon are reported in this paper. The first project described was designed to test experimentally several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. The second project description provides the…
ERIC Educational Resources Information Center
Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin
2014-01-01
Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…
Kim, Albert; Lai, Vicky
2012-05-01
We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."
Effective connectivity of visual word recognition and homophone orthographic errors
Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve
2015-01-01
The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity involved in processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-of-scanner tests: a high spelling skills (HSS) group and a low spelling skills (LSS) group. During the fMRI session, two experimental tasks were administered (a spelling recognition task and a visuoperceptual recognition task). Regions of interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were estimated for each spelling-skill group (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all conditions across tasks and groups. The HSS group's SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects that are nonetheless congruent with the previous results, with several areas playing an important role. In general, these results are consistent with major findings from previous studies of linguistic processing, but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070
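The study estimated SEMs and DCMs, which the sketch below does not reproduce. It illustrates only the model-comparison logic: fitting two candidate directed path models to simulated ROI time series by ordinary least squares and comparing them with AIC. The region names, path structures, and all numbers are placeholders, not the models or data reported in the paper.

```python
# Minimal illustration of comparing two directed path models over simulated
# ROI time series; not an SEM/DCM implementation.
import numpy as np

rng = np.random.default_rng(1)
T = 200
fus = rng.normal(size=T)                           # hypothetical occipitotemporal source
mtg = 0.6 * fus + rng.normal(scale=0.5, size=T)    # middle temporal gyrus (simulated)
phg = 0.3 * mtg + rng.normal(scale=0.5, size=T)    # parahippocampal gyrus (simulated)
rois = {"FUS": fus, "MTG": mtg, "PHG": phg}

def fit_aic(target, parents):
    """OLS fit of one ROI on its hypothesized parents; AIC under Gaussian errors."""
    X = np.column_stack([np.ones(T)] + [rois[p] for p in parents])
    beta, *_ = np.linalg.lstsq(X, rois[target], rcond=None)
    rss = np.sum((rois[target] - X @ beta) ** 2)
    k = X.shape[1] + 1                              # coefficients + error variance
    return T * np.log(rss / T) + 2 * k

# Model A: FUS -> MTG -> PHG.  Model B: FUS -> PHG -> MTG.
aic_a = fit_aic("MTG", ["FUS"]) + fit_aic("PHG", ["MTG"])
aic_b = fit_aic("PHG", ["FUS"]) + fit_aic("MTG", ["PHG"])
print("AIC model A:", round(aic_a, 1), "AIC model B:", round(aic_b, 1))
```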
The Effect of Word Associations on the Recognition of Flashed Words.
ERIC Educational Resources Information Center
Samuels, S. Jay
The hypothesis was tested that when associated pairs of words are presented, speed of recognition is faster than when nonassociated word pairs are presented or when a target word is presented by itself. Twenty university students, initially screened for vision, were assigned randomly to rows of a 5 x 5 repeated-measures Latin square design.…
Influences of High and Low Variability on Infant Word Recognition
ERIC Educational Resources Information Center
Singh, Leher
2008-01-01
Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…
Compositional symbol grounding for motor patterns.
Greco, Alberto; Caneva, Claudio
2010-01-01
We developed a new experimental and simulative paradigm to study the establishment of compositional grounded representations for motor patterns. Participants learned to associate nonsense arm motor patterns, performed in three different hand postures, with nonsense words. There were two group conditions: in the first (compositional), each pattern was associated with a two-word (verb-adverb) sentence; in the second (holistic), each pattern was associated with a unique word. Two experiments were performed. In the first, motor pattern recognition and naming were tested in the two conditions. Results showed that verbal compositionality played no role in recognition and that the main source of confusability in this task came from discriminating hand postures. As the naming task proved too difficult, some changes in the learning procedure were implemented in the second experiment. In this experiment, the compositional group achieved better results in naming motor patterns, especially for patterns where discrimination of hand postures was relevant. In order to ascertain the differential effect of memory load and of systematic grounding upon this result, neural network simulations were also run. After a basic simulation that served as a good model of the participants' performance, subsequent simulations increased the number of stimuli (motor patterns and words) and disrupted the systematic association between words and patterns, while keeping the same number of words and the same syntax. Results showed that in both simulations the advantage for the compositional condition significantly increased. These simulations indicated that the advantage for this condition may be related more to systematicity than to mere informational gain. All results are discussed in connection with possible support for the hypothesis of a compositional motor representation and toward a more precise account of the factors that make compositional representations work.
Speaker information affects false recognition of unstudied lexical-semantic associates.
Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E
2018-05-01
Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.
The impact of inverted text on visual word processing: An fMRI study.
Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D
2018-06-01
Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes are disrupted by different text orientations, new insights can be gained into the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated, or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased rather than decreased activation. It is hypothesized here that VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.
Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.
Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I
2015-02-01
The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.
Visual Word Recognition Across the Adult Lifespan
Cohen-Shikora, Emily R.; Balota, David A.
2016-01-01
The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629
Aging and IQ effects on associative recognition and priming in item recognition
McKoon, Gail; Ratcliff, Roger
2012-01-01
Two ways to examine memory for associative relationships between pairs of words were tested: an explicit method, associative recognition, and an implicit method, priming in item recognition. In an experiment with both kinds of tests, participants were asked to learn pairs of words. For the explicit test, participants were asked to decide whether two words of a test pair had been studied in the same or different pairs. For the implicit test, participants were asked to decide whether single words had or had not been among the studied pairs. Some test words were immediately preceded in the test list by the other word of the same pair and some by a word from a different pair. Diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008) analyses were carried out for both tasks for college-age participants, 60–74 year olds, and 75–90 year olds, and for higher- and lower-IQ participants, in order to compare the two measures of associative strength. Results showed parallel behavior of drift rates for associative recognition and priming across ages and across IQ, indicating that they are based, at least to some degree, on the same information in memory. PMID:24976676
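A toy sketch of the diffusion-model idea underlying these analyses: simulate a noisy evidence-accumulation process between two boundaries and observe how a change in drift rate (the quantity compared across age and IQ groups) shifts mean response time and accuracy. The parameter values and the mapping of drift to associative strength are illustrative assumptions; this is a forward simulation, not the authors' fitting procedure, which estimates parameters from data.

```python
# Forward simulation of a two-boundary Wiener diffusion process (cf. Ratcliff, 1978).
# All parameter values below are arbitrary illustrations, not fitted estimates.
import numpy as np

def simulate_ddm(drift, boundary=0.1, noise=0.1, dt=0.001, ndt=0.3, n=2000, seed=0):
    """Euler simulation; returns mean RT (s) and proportion of upper-boundary responses."""
    rng = np.random.default_rng(seed)
    rts, upper = [], []
    for _ in range(n):
        x, t = boundary / 2.0, 0.0            # unbiased starting point
        while 0.0 < x < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)                   # add non-decision time
        upper.append(x >= boundary)
    return float(np.mean(rts)), float(np.mean(upper))

for label, drift in [("higher drift (stronger associative evidence)", 0.25),
                     ("lower drift (weaker associative evidence)", 0.10)]:
    rt, acc = simulate_ddm(drift)
    print(f"{label}: mean RT = {rt:.2f} s, upper-boundary responses = {acc:.2f}")
```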
Almabruk, Abubaker A. A.; Paterson, Kevin B.; McGowan, Victoria; Jordan, Timothy R.
2011-01-01
Background: Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist. Methods and Findings: Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task. Results: Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations. Conclusions: These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed. PMID:21559084
The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition
ERIC Educational Resources Information Center
Robert, Christelle; Mathey, Stephanie; Zagar, Daniel
2007-01-01
The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g.,nasse/basse-lasse-tasse-masse) or were unequally…
ERIC Educational Resources Information Center
Tsang, Yiu-Kei; Chen, Hsuan-Chih
2013-01-01
The role of morphemic meaning in Chinese word recognition was examined with the masked and unmasked priming paradigms. Target words contained ambiguous morphemes biased toward the dominant or the subordinate meanings. Prime words either contained the same ambiguous morphemes in the subordinate interpretations or were unrelated to the targets. In…
Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition
ERIC Educational Resources Information Center
Yap, Melvin J.; Balota, David A.
2007-01-01
Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…
Evidence for Early Morphological Decomposition in Visual Word Recognition
ERIC Educational Resources Information Center
Solomyak, Olla; Marantz, Alec
2010-01-01
We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…
Morphological Structures in Visual Word Recognition: The Case of Arabic
ERIC Educational Resources Information Center
Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)
2004-01-01
This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…
Perea, Manuel; Panadero, Victoria
2014-01-01
The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.
Lexical leverage: Category knowledge boosts real-time novel word recognition in two-year- olds
Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.
2016-01-01
Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher- and lower-knowledge domains and then asked whether their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain-knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. PMID:26452444
ERIC Educational Resources Information Center
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.
2009-01-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…
Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy
2012-06-01
Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.
Monnier, Catherine; Syssau, Arielle
2008-01-01
In the four experiments reported here, we examined the role of word pleasantness on immediate serial recall and immediate serial recognition. In Experiment 1, we compared verbal serial recall of pleasant and neutral words, using a limited set of items. In Experiment 2, we replicated Experiment 1 with an open set of words (i.e., new items were used on every trial). In Experiments 3 and 4, we assessed immediate serial recognition of pleasant and neutral words, using item sets from Experiments 1 and 2. Pleasantness was found to have a facilitation effect on both immediate serial recall and immediate serial recognition. This study supplies some new supporting arguments in favor of a semantic contribution to verbal short-term memory performance. The pleasantness effect observed in immediate serial recognition showed that, contrary to a number of earlier findings, performance on this task can also turn out to be dependent on semantic factors. The results are discussed in relation to nonlinguistic and psycholinguistic models of short-term memory.
Iconic gestures prime related concepts: an ERP study.
Wu, Ying Croon; Coulson, Seana
2007-02-01
To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.
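For readers unfamiliar with the windowed ERP measure, the sketch below simulates two averaged waveforms and quantifies the N400 effect as the mean amplitude difference in the 300-500 ms window, the same summary measure reported above. The sampling rate, waveform shapes, and amplitudes are invented; this is not the authors' recording or analysis code.

```python
# Illustrative N400-style windowed measure on simulated averaged ERPs.
import numpy as np

fs = 250                                   # sampling rate (Hz), hypothetical
times = np.arange(-0.1, 0.8, 1 / fs)       # epoch from -100 to 800 ms
rng = np.random.default_rng(2)

def simulated_erp(n400_amp):
    """One averaged ERP with a Gaussian-shaped negativity centred at 400 ms."""
    wave = n400_amp * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))
    return wave + rng.normal(0, 0.2, times.size)

related = simulated_erp(n400_amp=-2.0)     # less-negative N400 for related probes
unrelated = simulated_erp(n400_amp=-4.0)

window = (times >= 0.3) & (times <= 0.5)
effect = unrelated[window].mean() - related[window].mean()
print(f"N400 effect (unrelated - related) in 300-500 ms: {effect:.2f} microvolts")
```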
Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.
Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia
2015-09-01
We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.
Han, Feifei
2017-01-01
While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they can process a reading task without time constraint. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there is a lack of research testing the model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether reading conditions that vary time constraints affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized operation span task, and reading comprehension under time-constrained and non-time-constrained reading conditions. The correlation and regression analyses showed that the association between word recognition, working memory, and reading comprehension was much stronger under the time-constrained than the non-time-constrained reading condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and working memory limitations in non-time-constrained reading. The participants were tested on the same computerized English word recognition and operation span tests. They were required to think aloud while reading and to complete the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comment. The correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, but not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model to FL reading with Chinese college ELLs. PMID:28522984
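As a rough illustration of the comparison described in study one, the sketch below contrasts the strength of two correlations using Fisher's r-to-z transformation. The correlation values and sample sizes are hypothetical, and the test shown assumes independent samples; a within-subject design like the one described would call for a dependent-correlation test (e.g., Steiger's), so this is only a sketch of the logic.

```python
# Hypothetical comparison of correlation strength across reading conditions.
# Values are placeholders, not the study's statistics.
import numpy as np
from scipy import stats

r_timed, n_timed = 0.55, 60        # word recognition vs. comprehension, timed reading
r_untimed, n_untimed = 0.20, 60    # same correlation, untimed reading

z1, z2 = np.arctanh(r_timed), np.arctanh(r_untimed)   # Fisher r-to-z
se = np.sqrt(1.0 / (n_timed - 3) + 1.0 / (n_untimed - 3))
z = (z1 - z2) / se
p = 2 * stats.norm.sf(abs(z))      # two-tailed p (independent-samples version)
print(f"difference in correlations: z = {z:.2f}, p = {p:.3f}")
```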
Shi, Lu-Feng; Morozova, Natalia
2012-08-01
Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.
Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni
2016-06-01
Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.
2013-01-01
Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…
The locus of word frequency effects in skilled spelling-to-dictation.
Chua, Shi Min; Liow, Susan J Rickard
2014-01-01
In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.
Meade, Melissa E; Fernandes, Myra A
2016-07-01
We examined the influence of divided attention (DA) on recognition of words when the concurrent task was semantically related or unrelated to the to-be-recognised target words. Participants were asked to either study or retrieve a target list of semantically related words while simultaneously making semantic decisions (i.e., size judgements) to another set of related or unrelated words heard concurrently. We manipulated semantic relatedness of distractor to target words, and whether DA occurred during the encoding or retrieval phase of memory. Recognition accuracy was significantly diminished relative to full attention, following DA conditions at encoding, regardless of relatedness of distractors to study words. However, response times (RTs) were slower with related compared to unrelated distractors. Similarly, under DA at retrieval, recognition RTs were slower when distractors were semantically related than unrelated to target words. Unlike the effect from DA at encoding, recognition accuracy was worse under DA at retrieval when the distractors were related compared to unrelated to the target words. Results suggest that availability of general attentional resources is critical for successful encoding, whereas successful retrieval is particularly reliant on access to a semantic code, making it sensitive to related distractors under DA conditions.
Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander
2014-09-01
The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the reliance on relatively small samples. Additionally, it has been proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, time-consuming strategies than typically developing children to recognize them accurately. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, in which they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD when controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD showed significantly lower emotion recognition accuracy than typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.
Lúcio, Patrícia S.; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M.; Bressan, Rodrigo A.; Gadelha, Ary; Rohde, Luis A.; Cogo-Moreira, Hugo
2017-01-01
Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants (n = 1,935; 47% female; 11% ADHD) were children aged 6–15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed. PMID:29118733
False recognition production indexes in Spanish for 60 DRM lists with three critical words.
Beato, Maria Soledad; Díez, Emiliano
2011-06-01
A normative study was conducted using the Deese/Roediger-McDermott paradigm (DRM) to obtain false recognition for 60 six-word lists in Spanish, designed with a completely new methodology. For the first time, lists included words (e.g., bridal, newlyweds, bond, commitment, couple, to marry) simultaneously associated with three critical words (e.g., love, wedding, marriage). Backward associative strength between lists and critical words was taken into account when creating the lists. The results showed that all lists produced false recognition. Moreover, some lists had a high false recognition rate (e.g., 65%; jail, inmate, prison: bars, prisoner, cell, offender, penitentiary, imprisonment). This is an aspect of special interest for those DRM experiments that, for example, record brain electrical activity. This type of list will enable researchers to raise the signal-to-noise ratio in false recognition event-related potential studies as they increase the number of critical trials per list, and it will be especially useful for the design of future research.
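As an illustration of how such list-level indices can be derived from free-association norms, the sketch below sums the backward associative strength (BAS) from each studied word to each of the three critical words. The studied words and critical words are taken from the example in the abstract, but the norm values are invented placeholders, not the Spanish norms the authors used.

```python
# BAS from studied words to critical words, using hypothetical association norms.
# norms[cue][response] = proportion of respondents giving `response` to `cue`.
norms = {
    "bridal":     {"wedding": 0.42, "marriage": 0.10, "love": 0.03},
    "newlyweds":  {"marriage": 0.25, "wedding": 0.20, "love": 0.08},
    "bond":       {"love": 0.06, "marriage": 0.04},
    "commitment": {"love": 0.15, "marriage": 0.12},
    "couple":     {"love": 0.22, "marriage": 0.09, "wedding": 0.04},
    "to marry":   {"wedding": 0.30, "marriage": 0.28, "love": 0.05},
}
critical_words = ["love", "wedding", "marriage"]

# Sum BAS toward each critical word across the six studied words (one composite index).
for crit in critical_words:
    bas = sum(responses.get(crit, 0.0) for responses in norms.values())
    print(f"summed BAS toward '{crit}': {bas:.2f}")
```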
Word recognition in Alzheimer's disease: Effects of semantic degeneration.
Cuetos, Fernando; Arce, Noemí; Martínez, Carmen; Ellis, Andrew W
2017-03-01
Impairments of word recognition in Alzheimer's disease (AD) have been less widely investigated than impairments affecting word retrieval and production. In particular, we know little about what makes individual words easier or harder for patients with AD to recognize. We used a lexical selection task in which participants were shown sets of four items, each set consisting of one word and three non-words. The task was simply to point to the word on each trial. Forty patients with mild-to-moderate AD were significantly impaired on this task relative to matched controls who made very few errors. The number of patients with AD able to recognize each word correctly was predicted by the frequency, age of acquisition, and imageability of the words, but not by their length or number of orthographic neighbours. Patient Mini-Mental State Examination and phonological fluency scores also predicted the number of words recognized. We propose that progressive degradation of central semantic representations in AD differentially affects the ability to recognize low-imageability, low-frequency, late-acquired words, with the same factors affecting word recognition as affecting word retrieval. © 2015 The British Psychological Society.
Contextual diversity facilitates learning new words in the classroom.
Rosa, Eva; Tapia, José Luis; Perea, Manuel
2017-01-01
In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings.
Medical Named Entity Recognition for Indonesian Language Using Word Representations
NASA Astrophysics Data System (ADS)
Rahman, Arief
2018-03-01
Nowadays, Named Entity Recognition (NER) systems are applied to medical texts to extract important medical information, such as diseases, symptoms, and drugs. While most NER systems target formal medical texts, informal texts such as those from social media (also called semi-formal texts) are starting to be recognized as a rich source of medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based word representation and distributed representation.
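The contrast between the two representation types can be sketched on toy data: cluster-based features assign each word a discrete cluster ID (here via k-means over co-occurrence counts), while distributed features give each word a dense low-dimensional vector (here via truncated SVD as a stand-in for trained embeddings). The tiny Indonesian corpus, the clustering choices, and the feature dimensions are illustrative assumptions, not the paper's setup.

```python
# Toy contrast of cluster-based vs. distributed word representations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import TruncatedSVD

corpus = [
    "demam dan batuk sejak kemarin",        # "fever and cough since yesterday"
    "minum paracetamol untuk demam",        # "took paracetamol for fever"
    "batuk berdahak dan pilek",             # "productive cough and runny nose"
]
vocab = sorted({w for sent in corpus for w in sent.split()})
index = {w: i for i, w in enumerate(vocab)}

# Symmetric word-by-word co-occurrence counts within a sentence window.
cooc = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    words = sent.split()
    for w in words:
        for v in words:
            if w != v:
                cooc[index[w], index[v]] += 1

clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(cooc)
embeddings = TruncatedSVD(n_components=2, random_state=0).fit_transform(cooc)

for w in ["demam", "batuk", "paracetamol"]:
    print(w, "-> cluster", clusters[index[w]], "embedding", np.round(embeddings[index[w]], 2))
```

Either feature type could then be fed to a sequence labeler for the NER step itself.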
The impact of left and right intracranial tumors on picture and word recognition memory.
Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V
2004-02-01
This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.
Sight Word Recognition among Young Children At-Risk: Picture-Supported vs. Word-Only
ERIC Educational Resources Information Center
Meadan, Hedda; Stoner, Julia B.; Parette, Howard P.
2008-01-01
A quasi-experimental design was used to investigate the impact of Picture Communication Symbols (PCS) on sight word recognition by young children identified as "at risk" for academic and social-behavior difficulties. Ten pre-primer and 10 primer Dolch words were presented to 23 students in the intervention group and 8 students in the…
Word Recognition Error Analysis: Comparing Isolated Word List and Oral Passage Reading
ERIC Educational Resources Information Center
Flynn, Lindsay J.; Hosp, John L.; Hosp, Michelle K.; Robbins, Kelly P.
2011-01-01
The purpose of this study was to determine the relation between word recognition errors made at a letter-sound pattern level on a word list and on a curriculum-based measurement oral reading fluency measure (CBM-ORF) for typical and struggling elementary readers. The participants were second, third, and fourth grade typical and struggling readers…
ERIC Educational Resources Information Center
Shafiro, Valeriy; Kharkhurin, Anatoliy V.
2009-01-01
Abstract Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…
ERIC Educational Resources Information Center
Boot, Inge; Pecher, Diane
2008-01-01
Many models of word recognition predict that neighbours of target words will be activated during word processing. Cascaded models can make the additional prediction that semantic features of those neighbours get activated before the target has been uniquely identified. In two semantic decision tasks neighbours that were congruent (i.e., from the…
Semantic Ambiguity Effects in L2 Word Recognition
ERIC Educational Resources Information Center
Ishida, Tomomi
2018-01-01
The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a…
The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words
ERIC Educational Resources Information Center
Lázaro, Miguel; Sainz, Javier; Illera, Víctor
2015-01-01
In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…
ERIC Educational Resources Information Center
Janssen, David Rainsford
This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…
From Numbers to Letters: Feedback Regularization in Visual Word Recognition
ERIC Educational Resources Information Center
Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel
2010-01-01
Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…
Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language
ERIC Educational Resources Information Center
Norman, Tal; Degani, Tamar; Peleg, Orna
2017-01-01
The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…
Individual differences in language and working memory affect children's speech recognition in noise.
McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc
2017-05-01
We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Participants were ninety-six children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.
Reading component skills in dyslexia: word recognition, comprehension and processing speed.
de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C
2014-01-01
The cognitive model of reading comprehension (RC) posits that RC is a result of the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills could be integrated into this model, such as processing speed, and have consistently indicated that this skill influences and is an important predictor of the main components of the model, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a Dyslexic Group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral comprehension and RC, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences regarding accuracy in oral comprehension and RC, phonological awareness, naming, and vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. Results corroborated the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on the RC test. The data support the importance of delimiting the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.
ERIC Educational Resources Information Center
Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.
2008-01-01
We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…
Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.
Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric
2013-01-04
It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.
Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A
2003-07-01
Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
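The frequency-weighted choice rule summarized above lends itself to a compact worked example. The sketch below is a hedged approximation of that rule (segment-wise perception probabilities from a confusion matrix, weighted by word frequency and normalized over the word plus its neighbors); the confusion values, neighbor set, and frequency weights are invented, and a real application would use a listener's full consonant and vowel confusion matrices.

# Hedged sketch (not the published model code) of a frequency-weighted
# neighborhood choice rule. All numbers below are toy values for illustration.

confusion = {
    ("k", "k"): 0.80, ("k", "t"): 0.20,
    ("ae", "ae"): 0.90, ("ae", "eh"): 0.10,
    ("t", "t"): 0.85, ("t", "k"): 0.15,
}

def perception_prob(stimulus, response):
    # Product of segment-wise confusion probabilities (segments assumed independent).
    p = 1.0
    for s, r in zip(stimulus, response):
        p *= confusion.get((s, r), 0.0)
    return p

def identification_prob(stimulus, candidates, freq_weight):
    # P(choose the stimulus word) among the stimulus and its similar-sounding neighbors.
    activation = {w: perception_prob(stimulus, w) * freq_weight[w] for w in candidates}
    total = sum(activation.values())
    return activation[stimulus] / total if total > 0 else 0.0

word = ("k", "ae", "t")                            # e.g., "cat"
neighbors = [("k", "ae", "k"), ("t", "ae", "t")]   # toy similar-sounding neighbors
freq_weight = {word: 2.5, neighbors[0]: 1.0, neighbors[1]: 1.8}

print(identification_prob(word, [word] + neighbors, freq_weight))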
Lexical is as lexical does: computational approaches to lexical representation
Woollams, Anna M.
2015-01-01
In much of neuroimaging and neuropsychology, regions of the brain have been associated with ‘lexical representation’, with little consideration as to what this cognitive construct actually denotes. Within current computational models of word recognition, there are a number of different approaches to the representation of lexical knowledge. Structural lexical representations, found in original theories of word recognition, have been instantiated in modern localist models. However, such a representational scheme lacks neural plausibility in terms of economy and flexibility. Connectionist models have therefore adopted distributed representations of form and meaning. Semantic representations in connectionist models necessarily encode lexical knowledge. Yet when equipped with recurrent connections, connectionist models can also develop attractors for familiar forms that function as lexical representations. Current behavioural, neuropsychological and neuroimaging evidence shows a clear role for semantic information, but also suggests some modality- and task-specific lexical representations. A variety of connectionist architectures could implement these distributed functional representations, and further experimental and simulation work is required to discriminate between these alternatives. Future conceptualisations of lexical representations will therefore emerge from a synergy between modelling and neuroscience. PMID:25893204
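As a toy illustration of the attractor idea mentioned above (and not any specific published model), the sketch below stores two distributed "word form" patterns in a small Hopfield-style network and shows a degraded input settling back onto the stored familiar form; the binary patterns and network size are arbitrary.

import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],   # distributed form of "word A"
    [1, 1, -1, -1, 1, 1, -1, -1],   # distributed form of "word B"
])

# Hebbian weights with zero self-connections (standard Hopfield learning).
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)

def settle(state, steps=10):
    # Synchronously update units until the network settles into an attractor.
    state = state.copy()
    for _ in range(steps):
        new = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(new, state):
            break
        state = new
    return state

# Corrupt "word A" in two positions (a degraded input) ...
noisy = patterns[0].copy()
noisy[[1, 4]] *= -1
# ... and check that it settles back to the stored (familiar) form.
print(np.array_equal(settle(noisy), patterns[0]))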
NASA Technical Reports Server (NTRS)
Simpson, Carol A.
1990-01-01
The U.S. Army Crew Station Research and Development Facility uses vintage 1984 speech recognizers. An evaluation was performed of newer off-the-shelf speech recognition devices to determine whether newer technology performance and capabilities are substantially better than those of the Army's current speech recognizers. The Phonetic Discrimination (PD-100) Test was used to compare recognizer performance in two ambient noise conditions: quiet office and helicopter noise. Test tokens were spoken by males and females and in isolated-word and connected-word mode. Better overall recognition accuracy was obtained from the newer recognizers. Recognizer capabilities needed to support the development of human factors design requirements for speech command systems in advanced combat helicopters are listed.
Orthographic neighborhood effects in recognition and recall tasks in a transparent orthography.
Justi, Francis R R; Jaeger, Antonio
2017-04-01
The number of orthographic neighbors of a word influences its probability of being retrieved in recognition and free recall memory tests. Even though this phenomenon is well demonstrated for English words, it has yet to be demonstrated for languages with more predictable grapheme-phoneme mappings than English. To address this issue, 4 experiments were conducted to investigate effects of number of orthographic neighbors (N) and effects of frequency of occurrence of orthographic neighbors (NF) on memory retrieval of Brazilian Portuguese words. One hundred twenty-four Brazilian Portuguese speakers first performed a lexical-decision task (LDT) on words that were factorially manipulated according to N and NF, and intermixed with either nonpronounceable nonwords without orthographic neighbors (Experiments 1A and 2A), or with pronounceable nonwords with a large number of orthographic neighbors (Experiments 1B and 2B). The words were later used as probes on either recognition (Experiments 1A and 1B) or recall tests (Experiments 2A and 2B). Words with 1 orthographic neighbor were consistently better remembered than words with several orthographic neighbors in all recognition and recall tests. Notably, whereas in Experiment 1A higher false alarm rates were observed for words with several rather than 1 orthographic neighbor, in Experiment 1B higher false alarm rates were observed for words with 1 rather than several orthographic neighbors. Effects of NF, on the other hand, were not consistent across memory tasks. The effects of N on the recognition and recall tests conducted here are interpreted in light of dual process models of recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Conducting spoken word recognition research online: Validation and a new timing method.
Slote, Joseph; Strand, Julia F
2016-06-01
Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.
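A minimal sketch of the validation logic described above, assuming item-level accuracy scores collected in the lab and on an online platform; all numbers are fabricated, and the authors' actual analyses also covered lexical decision speed and predictors such as word frequency and phonological neighborhood density.

import numpy as np
from scipy.stats import pearsonr

# Proportion-correct identification for the same 8 words, lab vs. online (invented).
lab_accuracy = np.array([0.92, 0.85, 0.77, 0.64, 0.58, 0.81, 0.70, 0.95])
online_accuracy = np.array([0.88, 0.80, 0.70, 0.55, 0.50, 0.78, 0.66, 0.90])

r, p = pearsonr(lab_accuracy, online_accuracy)
print(f"lab-online correlation: r = {r:.2f}, p = {p:.3f}")

# The same logic extends to lexical predictors, correlated against either measure.
log_frequency = np.array([3.1, 2.4, 2.0, 1.2, 0.9, 2.6, 1.8, 3.4])  # hypothetical
print(pearsonr(log_frequency, online_accuracy))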
Yoneyama, Kiyoko; Munson, Benjamin
2017-02-01
This study examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon. Specifically, the experiment examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND on either measure was found. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.
Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.
Marcet, Ana; Perea, Manuel
2017-08-01
For simplicity, contemporary models of written-word recognition and reading have underspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.
Boles, D B
1989-01-01
Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.
Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.
Jones, Nicola; Riby, Leigh M; Smith, Michael A
2016-07-01
Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as faces is a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.
Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.
Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B
2003-04-01
The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
The effects of age and divided attention on spontaneous recognition.
Anderson, Benjamin A; Jacoby, Larry L; Thomas, Ruthann C; Balota, David A
2011-05-01
Studies of recognition typically involve tests in which the participant's memory for a stimulus is directly questioned. There are occasions, however, in which memory occurs more spontaneously (e.g., an acquaintance seeming familiar out of context). Spontaneous recognition was investigated in a novel paradigm involving study of pictures and words followed by recognition judgments on stimuli with an old or new word superimposed over an old or new picture. Participants were instructed to make their recognition decision on either the picture or word and to ignore the distracting stimulus. Spontaneous recognition was measured as the influence of old vs. new distracters on target recognition. Across two experiments, older adults and younger adults placed under divided attention showed a greater tendency to spontaneously recognize old distracters as compared to full-attention younger adults. The occurrence of spontaneous recognition is discussed in relation to the ability to constrain retrieval to goal-relevant information.
Effects of hydrocortisone on false memory recognition in healthy men and women.
Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja
2016-12-01
Most studies examining the effect of stress on false memories using psychosocial and physiological stressors have yielded diverse results. In the present study, we systematically tested the effect of exogenous hydrocortisone using a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the false memory paradigm, that is, the Deese-Roediger-McDermott (DRM) paradigm. We used emotionally charged and neutral DRM-based word lists to compare false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and placebo groups were revealed for either false or true recognition performance. In general, false recognition rates were lower than true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We also found an interaction effect between sex and recognition. Post hoc t tests showed that for true recognition, women showed significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish between old and novel words in young healthy individuals. However, sex and emotional valence of word stimuli appear to be important moderators. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization
ERIC Educational Resources Information Center
Bardhan, Neil Prodeep
2010-01-01
Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…
ERIC Educational Resources Information Center
Sauval, Karinne; Casalis, Séverine; Perre, Laetitia
2017-01-01
This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…
ERIC Educational Resources Information Center
Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.
2011-01-01
Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors travel to Russia. Many of these visitors may have trouble typing Russian words when using a digital dictionary. This is because Cyrillic, the script used in Russia and neighboring countries, has letter shapes that differ from Latin letters, and visitors may not be familiar with it. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera captures an image of the words as input. The captured image is cropped, and several pre-processing steps are applied, such as noise filtering, binarization, segmentation and thinning. Next, feature extraction is applied to the image, and the Cyrillic letters in the image are recognized using the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from computer-generated images and 88.89% of Cyrillic letters from images captured by a smartphone camera. For word recognition, SOM successfully recognized 292 words and partially recognized 58 words from the images captured by the smartphone camera, giving a word recognition accuracy of 83.42%.
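As a rough sketch of the recognition stage described above (feature vectors from segmented, thinned letter images classified by a self-organizing map), the code below trains a tiny one-dimensional SOM on invented 4-dimensional feature vectors and labels a test input by its winning node. It illustrates the general technique, not the authors' implementation; the feature values, node count, and letter labels are assumptions.

import numpy as np

rng = np.random.default_rng(1)

# Toy training data: feature vector -> Cyrillic letter label (stand-ins for real image features).
train = [
    (np.array([0.90, 0.10, 0.80, 0.20]), "Д"),
    (np.array([0.85, 0.15, 0.75, 0.25]), "Д"),
    (np.array([0.10, 0.90, 0.20, 0.80]), "Ж"),
    (np.array([0.15, 0.85, 0.25, 0.75]), "Ж"),
]

n_nodes, dim = 4, 4
weights = rng.random((n_nodes, dim))

def winner(x):
    # Best-matching unit: node whose weight vector is closest to the input.
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

# Train: pull the winning node (and, more weakly, its neighbors) toward each
# sample while the learning rate decays across epochs.
for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)
    for x, _ in train:
        w = winner(x)
        for j in range(n_nodes):
            influence = 1.0 if j == w else 0.3 / (abs(j - w) + 1)
            weights[j] += lr * influence * (x - weights[j])

# Label each node by the training letters it wins, then classify a new input.
node_labels = {}
for x, label in train:
    node_labels.setdefault(winner(x), []).append(label)

test = np.array([0.88, 0.12, 0.78, 0.22])   # unseen input with "Д"-like features
print(node_labels.get(winner(test), ["?"]))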
Predicting individual differences in reading comprehension: a twin study
Cutting, Laurie; Deater-Deckard, Kirby; DeThorne, Laura S.; Justice, Laura M.; Schatschneider, Chris; Thompson, Lee A.; Petrill, Stephen A.
2010-01-01
We examined the Simple View of reading from a behavioral genetic perspective. Two aspects of word decoding (phonological decoding and word recognition), two aspects of oral language skill (listening comprehension and vocabulary), and reading comprehension were assessed in a twin sample at age 9. Using latent factor models, we found that overlap among phonological decoding, word recognition, listening comprehension, vocabulary, and reading comprehension was primarily due to genetic influences. Shared environmental influences accounted for associations among word recognition, listening comprehension, vocabulary, and reading comprehension. Independent of phonological decoding and word recognition, there was a separate genetic link between listening comprehension, vocabulary, and reading comprehension and a specific shared environmental link between vocabulary and reading comprehension. There were no residual genetic or environmental influences on reading comprehension. The findings provide evidence for a genetic basis to the “Simple View” of reading. PMID:20814768
Ease of identifying words degraded by visual noise.
Barber, P; de la Mahotière, C
1982-08-01
A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.
Rapid extraction of gist from visual text and its influence on word recognition.
Asano, Michiko; Yokosawa, Kazuhiko
2011-01-01
Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.
Estes, Zachary; Adelman, James S
2008-08-01
An automatic vigilance hypothesis states that humans preferentially attend to negative stimuli, and this attention to negative valence disrupts the processing of other stimulus properties. Thus, negative words typically elicit slower color naming, word naming, and lexical decisions than neutral or positive words. Larsen, Mercer, and Balota analyzed the stimuli from 32 published studies, and they found that word valence was confounded with several lexical factors known to affect word recognition. Indeed, with these lexical factors covaried out, Larsen et al. found no evidence of automatic vigilance. The authors report a more sensitive analysis of 1011 words. Results revealed a small but reliable valence effect, such that negative words (e.g., "shark") elicit slower lexical decisions and naming than positive words (e.g., "beach"). Moreover, the relation between valence and recognition was categorical rather than linear; the extremity of a word's valence did not affect its recognition. This valence effect was not attributable to word length, frequency, orthographic neighborhood size, contextual diversity, first phoneme, or arousal. Thus, the present analysis provides the most powerful demonstration of automatic vigilance to date.
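A hedged sketch of this style of item-level analysis: regressing response time on valence category while covarying lexical variables. The data frame below is fabricated and far smaller than the roughly 1,011-word set analyzed in the paper, the variable names are invented, and the covariate list is abbreviated.

import pandas as pd
import statsmodels.formula.api as smf

# Fabricated item-level data: RT plus valence category and lexical covariates.
items = pd.DataFrame({
    "rt":        [612, 598, 640, 655, 605, 630, 597, 648, 620, 601, 644, 610],
    "valence":   ["pos", "pos", "neg", "neg", "neu", "neg",
                  "pos", "neg", "neu", "pos", "neg", "neu"],
    "length":    [5, 6, 5, 7, 6, 4, 5, 6, 5, 4, 7, 6],
    "log_freq":  [2.8, 2.1, 2.5, 1.4, 2.2, 2.9, 3.0, 1.6, 2.4, 3.2, 1.2, 2.0],
    "n_density": [8, 5, 7, 2, 6, 9, 8, 3, 6, 10, 2, 5],
})

# Categorical valence (rather than a linear valence score) mirrors the claim
# that the effect is categorical; neutral words serve as the reference level.
model = smf.ols("rt ~ C(valence, Treatment('neu')) + length + log_freq + n_density",
                data=items).fit()
print(model.params.round(2))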
Development of First-Graders' Word Reading Skills: For Whom Can Dynamic Assessment Tell Us More?
Cho, Eunsoo; Compton, Donald L; Gilbert, Jennifer K; Steacy, Laura M; Collins, Alyson A; Lindström, Esther R
2017-01-01
Dynamic assessment (DA) of word reading measures learning potential for early reading development by documenting the amount of assistance needed to learn how to read words with unfamiliar orthography. We examined the additive value of DA for predicting first-grade decoding and word recognition development while controlling for autoregressive effects. Additionally, we examined whether the predictive validity of DA would be higher for students who have poor phonological awareness skills. First-grade students (n = 105) were assessed on measures of word reading, phonological awareness, rapid automatized naming, and DA in the fall and again assessed on word reading measures in the spring. A series of planned, moderated multiple regression analyses indicated that DA made a significant and unique contribution to predicting word recognition development above and beyond the autoregressor, particularly for students with poor phonological awareness skills. For these students, DA explained 3.5% of the unique variance in end-of-first-grade word recognition that was not attributable to the autoregressive effect. Results suggest that DA provides an important source of individual differences in the development of word recognition skills that cannot be fully captured by merely assessing the present level of reading skills through traditional static assessment, particularly for students at risk for developing reading disabilities. © Hammill Institute on Disabilities 2015.
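The moderation analysis described above can be sketched as follows, under the assumption of a simple linear model with a DA x phonological-awareness interaction on top of the autoregressor; all scores are simulated, so the coefficients only illustrate the structure of the test, not the study's results.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 105
fall = rng.normal(0, 1, n)    # fall word reading (autoregressor)
pa = rng.normal(0, 1, n)      # phonological awareness
da = rng.normal(0, 1, n)      # dynamic assessment score
# Simulate a spring score in which DA helps most when PA is low.
spring = 0.6 * fall + 0.2 * pa + 0.15 * da - 0.2 * pa * da + rng.normal(0, 0.5, n)

df = pd.DataFrame({"spring": spring, "fall": fall, "pa": pa, "da": da})
model = smf.ols("spring ~ fall + pa * da", data=df).fit()
print(model.params[["da", "pa:da"]])   # DA effect and its moderation by PA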
Textual emotion recognition for enhancing enterprise computing
NASA Astrophysics Data System (ADS)
Quan, Changqin; Ren, Fuji
2016-05-01
The growing interest in affective computing (AC) has generated many valuable research topics that can meet different application demands in enterprise systems. The present study explores a subarea of AC techniques - textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It ensures a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that bear indirect emotions, emotional ambiguity, and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitive ability in many aspects.
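A hedged sketch contrasting the two text representations compared above for multi-label emotion tagging. It substitutes an off-the-shelf one-vs-rest classifier over unigram versus order-preserving n-gram features for the paper's bag-of-words and homogeneous Markov models, so it illustrates the representational contrast rather than reproducing the authors' method; the sentences and emotion labels are invented.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

sentences = [
    "i am happy but also a little scared",
    "this is wonderful news",
    "i am scared and angry about the delay",
    "what a sad and disappointing result",
]
labels = [{"joy", "fear"}, {"joy"}, {"fear", "anger"}, {"sadness"}]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)   # multi-label indicator matrix

for name, ngrams in [("bag-of-words", (1, 1)), ("order-sensitive", (1, 2))]:
    # (1, 1) ignores word order; (1, 2) adds bigrams that preserve local order.
    X = CountVectorizer(ngram_range=ngrams).fit_transform(sentences)
    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    print(name, mlb.inverse_transform(clf.predict(X)))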
Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.
Hunter, Cynthia R; Pisoni, David B
Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
The Influence of Phonotactic Probability on Word Recognition in Toddlers
ERIC Educational Resources Information Center
MacRoy-Higgins, Michelle; Shafer, Valerie L.; Schwartz, Richard G.; Marton, Klara
2014-01-01
This study examined the influence of phonotactic probability on word recognition in English-speaking toddlers. Typically developing toddlers completed a preferential looking paradigm using familiar words, which consisted of either high or low phonotactic probability sound sequences. The participants' looking behavior was recorded in response to…
Individual Differences in Online Spoken Word Recognition: Implications for SLI
ERIC Educational Resources Information Center
McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce
2010-01-01
Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…
Influences of Lexical Processing on Reading.
ERIC Educational Resources Information Center
Yang, Yu-Fen; Kuo, Hsing-Hsiu
2003-01-01
Investigates how early lexical processing (word recognition) could influence reading. Finds that less-proficient readers could not finish the task of word recognition within time limits and their accuracy rates were quite low, whereas the proficient readers processed the physical words immediately and translated them into meanings quickly in order…
A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition
ERIC Educational Resources Information Center
Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko
2015-01-01
When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…
Cross-modal working memory binding and word recognition skills: how specific is the link?
Wang, Shinmin; Allen, Richard J
2018-04-01
Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.
Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M
2017-11-01
Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.
Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children
Lewis, Dawna E.; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat
2016-01-01
Objectives The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- vs. low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design Sixteen CHH with mild-to-moderate hearing loss and 16 age-matched CNH participated (5–12 yrs). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a 5- or 3-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably to CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared to their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates. PMID:28045838
The Role of the Association in Recognition Memory.
ERIC Educational Resources Information Center
Underwood, Benton J.
The purpose of the eight experiments was to assess the role which associations between two words played in recognition decisions. The evidence on weak associations established in the laboratory indicated that association was playing a small role, but that the recognition performance on pairs of words was highly predictable from frequency…
Memory effects of sleep, emotional valence, arousal and novelty in children.
Vermeulen, Marije C M; van der Heijden, Kristiaan B; Benjamins, Jeroen S; Swaab, Hanna; van Someren, Eus J W
2017-06-01
The effectiveness of memory consolidation is determined by multiple factors, including sleep after learning, emotional valence, arousal and novelty. Few studies have investigated how the effect of sleep compares with, and interacts with, these other factors, and virtually none have done so in children. The present study did so by repeated assessment of declarative memory in 386 children (45% boys) aged 9-11 years through an online word-pair task. Children were randomly assigned to either a morning or evening learning session of 30 unrelated word-pairs with positively, neutrally or negatively valenced cues and neutral targets. Baseline recognition was assessed immediately, and delayed recognition was recorded either 12 or 24 h later, resulting in four different assessment schedules. One week later, the procedure was repeated with exactly the same word-pairs to evaluate whether effects differed for relearning versus original novel learning. Mixed-effect logistic regression models were used to evaluate how the probability of correct recognition was affected by sleep, valence, arousal, novelty and their interactions. Both immediate and delayed recognition were worse for pairs with negatively valenced or less arousing cue words. Relearning improved immediate and delayed word-pair recognition. In contrast to these effects, sleep did not affect recognition, nor did sleep moderate the effects of arousal, valence and novelty. The findings suggest a robust inclination of children to specifically forget the pairing of words to negatively valenced cue words. In agreement with a recent meta-analysis, children seem to depend less on sleep for the consolidation of information than has been reported for adults, irrespective of the emotional valence, arousal and novelty of word-pairs. © 2017 European Sleep Research Society.
Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease
Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru
2012-01-01
Background/Aims Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551
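The analysis above is a repeated-measures logistic regression. One common way to fit such a model, shown here only as a hedged sketch on simulated data, is a generalized estimating equation (GEE) with a binomial family and clustering by patient; the predictor names are placeholders, and GEE is an assumption about the implementation rather than necessarily the authors' procedure.

```python
# Hedged sketch: repeated-measures logistic regression via GEE on simulated data.
# Variable names are hypothetical; GEE is one possible implementation of the
# "logistic regression for repeated measures" described above.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_patients, n_tasks = 60, 3                    # free recall, cued recall, recognition
patient = np.repeat(np.arange(n_patients), n_tasks)
orientation = np.repeat(rng.integers(0, 8, n_patients), n_tasks)   # ADAS Orientation errors
praxis = np.repeat(rng.integers(0, 5, n_patients), n_tasks)        # ADAS Ideational Praxis errors
logit = 1.0 - 0.3 * orientation - 0.4 * praxis
correct = rng.binomial(1, 1 / (1 + np.exp(-logit)))                # task passed / failed

df = pd.DataFrame(dict(patient=patient, orientation=orientation,
                       praxis=praxis, correct=correct))
model = smf.gee("correct ~ orientation + praxis", groups="patient",
                data=df, family=sm.families.Binomial())
print(model.fit().summary())
```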
Reconsidering the role of temporal order in spoken word recognition.
Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob
2013-10-01
Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.
Choi, Wonil; Gordon, Peter C.
2013-01-01
The coordination of word-recognition and oculomotor processes during reading was evaluated in two eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target’s TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. PMID:23106372
Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.
2012-01-01
Hypotheses Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. Conclusion The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process. PMID:12851554
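The decision rule summarized above can be made concrete with a simplified sketch: a word's report probability is its phoneme-by-phoneme perception probability (taken from the confusion matrices), weighted by word frequency and normalized against the frequency-weighted probabilities of its similar-sounding neighbors. The confusion probabilities and counts below are toy values, and the sketch omits several details of the published model.

```python
# Simplified sketch of the decision rule summarized above (not the authors' code).
# Confusion probabilities and frequency counts are toy values.

def phoneme_prob(stimulus, response, confusion):
    """Product of per-position confusion probabilities P(report r | present s)."""
    p = 1.0
    for s, r in zip(stimulus, response):
        p *= confusion.get((s, r), 0.01)   # small floor for unlisted confusions
    return p

def nam_probability(stimulus, candidates, confusion, frequency):
    """P(choose the stimulus word) among the stimulus and its neighbors."""
    weighted = {w: phoneme_prob(stimulus, w, confusion) * frequency[w]
                for w in candidates}
    return weighted[stimulus] / sum(weighted.values())

# Toy example: CVC words coded as phoneme tuples.
confusion = {("k", "k"): .8, ("k", "g"): .2, ("ae", "ae"): .9, ("ae", "eh"): .1,
             ("t", "t"): .7, ("t", "d"): .3, ("g", "g"): .8, ("d", "d"): .7}
frequency = {("k", "ae", "t"): 300, ("k", "ae", "d"): 5, ("g", "ae", "t"): 20}
stimulus = ("k", "ae", "t")
print(nam_probability(stimulus, list(frequency), confusion, frequency))
```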
Tracking speech comprehension in space and time.
Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D
2006-07-01
A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the point at which the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.
Semantic Neighborhood Effects for Abstract versus Concrete Words
Danguecan, Ashley N.; Buchanan, Lori
2016-01-01
Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422
Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.
Robotham, Ro J; Starrfelt, Randi
2017-01-01
Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.
ERIC Educational Resources Information Center
Sheehy, Kieron
2002-01-01
A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show that using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)
ERIC Educational Resources Information Center
Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony
2013-01-01
Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…
Limited Role of Contextual Information in Adult Word Recognition. Technical Report No. 411.
ERIC Educational Resources Information Center
Durgunoglu, Aydin Y.
Recognizing a word in a meaningful text involves processes that combine information from many different sources, and both bottom-up processes (such as feature extraction and letter recognition) and top-down processes (contextual information) are thought to interact when skilled readers recognize words. Two similar experiments investigated word…
ERIC Educational Resources Information Center
Park, Denise Cortis; And Others
1983-01-01
Tested recognition memory for items and spatial location by varying picture and word stimuli across four slide quadrants. Results showed a pictorial superiority effect for item recognition and a greater ability to remember the spatial location of pictures versus words for both old and young adults (N=95). (WAS)
Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies
ERIC Educational Resources Information Center
Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.
2004-01-01
Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…
ERIC Educational Resources Information Center
Obregon, Mateo; Shillcock, Richard
2012-01-01
Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…
The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions
ERIC Educational Resources Information Center
Brouwer, Susanne; Bradlow, Ann R.
2016-01-01
This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…
Using Constant Time Delay to Teach Braille Word Recognition
ERIC Educational Resources Information Center
Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah
2014-01-01
Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…
Spoken Word Recognition of Chinese Words in Continuous Speech
ERIC Educational Resources Information Center
Yip, Michael C. W.
2015-01-01
The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending position of Cantonese syllables than others, these kinds of probabilistic information about syllables may cue the locations…
Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese
ERIC Educational Resources Information Center
Wong, Andus Wing-Kuen; Chen, Hsuan-Chih
2012-01-01
Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…
ERIC Educational Resources Information Center
Brochard, Renaud; Tassin, Maxime; Zagar, Daniel
2013-01-01
The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…
Genetic and Environmental Influences on Individual Differences in Printed Word Recognition.
ERIC Educational Resources Information Center
Gayan, Javier; Olson, Richard K.
2003-01-01
Explored genetic and environmental etiologies of individual differences in printed word recognition and related skills in identical and fraternal twin 8- to 18-year-olds. Found evidence for moderate genetic influences common between IQ, phoneme awareness, and word-reading skills and for stronger IQ-independent genetic influences that were common…
L2 Gender Facilitation and Inhibition in Spoken Word Recognition
ERIC Educational Resources Information Center
Behney, Jennifer N.
2011-01-01
This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…
Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M
2009-04-01
Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.
Parallel language activation and cognitive control during spoken word recognition in bilinguals
Blumenfeld, Henrike K.; Marian, Viorica
2013-01-01
Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842
Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.
Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif
2016-03-11
Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important if training samples are to reflect the input of a specific area of application. However, generation of training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods. Each requires particular ground truth or samples to enable optimal training and validation, which are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifiers that we proposed earlier improves the word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
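The final step above, error correction against a vocabulary of common words, is often implemented by replacing a recognizer's word hypothesis with the nearest vocabulary entry under edit distance. The sketch below shows that generic approach with a toy vocabulary and assumed parameters, not the paper's exact procedure.

```python
# Hedged sketch: vocabulary-based error correction of a recognizer's word
# hypothesis by nearest edit distance. One common approach only; the paper's
# exact correction method is not specified here, and the vocabulary is a toy list.
def edit_distance(a, b):
    """Standard Levenshtein distance via single-row dynamic programming."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (ca != cb))  # substitution / match
    return dp[-1]

def correct(hypothesis, vocabulary, max_dist=2):
    """Return the closest vocabulary word within max_dist, else keep the hypothesis."""
    best = min(vocabulary, key=lambda w: edit_distance(hypothesis, w))
    return best if edit_distance(hypothesis, best) <= max_dist else hypothesis

vocabulary = ["كتاب", "مدرسة", "قلم"]      # stand-in for the 50,000-word list
print(correct("كتب", vocabulary))           # noisy hypothesis -> "كتاب"
```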
Danna, Jérémy; Massendari, Delphine; Furnari, Benjamin; Ducrot, Stéphanie
2018-06-13
Two eye-movement experiments were conducted to examine the effects of font type on the recognition of words presented in central vision, using a variable-viewing-position technique. Two main questions were addressed: (1) Is the optimal viewing position (OVP) for word recognition modulated by font type? (2) Is the cursive font more appropriate than the printed font in word recognition in children who exclusively write using a cursive script? In order to disentangle the role of perceptual difficulty associated with the cursive font and the impact of writing habits, we tested French adults (Experiment 1) and second-grade French children, the latter having exclusively learned to write in cursive (Experiment 2). Results revealed that the printed font is more appropriate than the cursive for recognizing words in both adults and children: adults were slightly less accurate in cursive than in printed stimuli recognition and children were slower to identify cursive stimuli than printed stimuli. Eye-movement measures also revealed that the OVP curves were flattened in cursive font in both adults and children. We concluded that the perceptual difficulty of the cursive font degrades word recognition by impacting the OVP stability. Copyright © 2018 Elsevier B.V. All rights reserved.
Brébion, Gildas; David, Anthony S; Bressan, Rodrigo A; Pilowsky, Lyn S
2007-01-01
The role of various types of slowing of processing speed, as well as the role of depressed mood, on each stage of verbal memory functioning in patients diagnosed with schizophrenia was investigated. Mixed lists of high- and low-frequency words were presented, and immediate and delayed free recall and recognition were required. Two levels of encoding were studied by contrasting the relatively automatic encoding of the high-frequency words and the more effortful encoding of the low-frequency words. Storage was studied by contrasting immediate and delayed recall. Retrieval was studied by contrasting free recall and recognition. Three tests of motor and cognitive processing speed were administered as well. Regression analyses involving the three processing speed measures revealed that cognitive speed was the only predictor of the recall and recognition of the low-frequency words. Furthermore, slowing in cognitive speed accounted for the deficit in recall and recognition of the low-frequency words relative to a healthy control group. Depressed mood was significantly associated with recognition of the low-frequency words. Neither processing speed nor depressed mood was associated with storage efficiency. It is concluded that both cognitive speed slowing and depressed mood impact on effortful encoding processes.
McArdle, Rachel; Wilson, Richard H
2008-06-01
To analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech recognition in noise. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
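A hedged sketch of the kind of regression reported above, using simulated data: the variance in per-word 50% points explained by acoustic variables is compared with the additional variance explained by a lexical predictor via nested ordinary-least-squares models. Variable names and effect sizes are invented for the illustration.

```python
# Hedged sketch: variance in per-word 50%-correct points explained by acoustic
# vs. lexical predictors, compared via R-squared of nested OLS models (simulated data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_words = 200
rms_level = rng.normal(0, 1, n_words)          # acoustic: effective RMS level
duration = rng.normal(0, 1, n_words)           # acoustic: word duration
word_freq = rng.normal(0, 1, n_words)          # lexical: log word frequency
snr50 = (-0.6 * rms_level - 0.3 * duration - 0.1 * word_freq
         + rng.normal(0, 0.7, n_words))        # simulated 50% points

acoustic = sm.add_constant(np.column_stack([rms_level, duration]))
full = sm.add_constant(np.column_stack([rms_level, duration, word_freq]))

r2_acoustic = sm.OLS(snr50, acoustic).fit().rsquared
r2_full = sm.OLS(snr50, full).fit().rsquared
print(f"acoustic R^2 = {r2_acoustic:.2f}, additional lexical R^2 = {r2_full - r2_acoustic:.2f}")
```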
The Role of Morphology in Word Recognition of Hebrew as a Templatic Language
ERIC Educational Resources Information Center
Oganyan, Marina
2017-01-01
Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the role of the root and template morphemes in recognition. It also explores the role of morphology…
Using Recall to Reduce False Recognition: Diagnostic and Disqualifying Monitoring
ERIC Educational Resources Information Center
Gallo, David A.
2004-01-01
Whether recall of studied words (e.g., parsley, rosemary, thyme) could reduce false recognition of related lures (e.g., basil) was investigated. Subjects studied words from several categories for a final recognition memory test. Half of the subjects were given standard test instructions, and half were instructed to use recall to reduce false…
Perceptual learning for speech in noise after application of binary time-frequency masks
Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.
2013-01-01
Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56 and 0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners, with the greatest improvement observed for the same materials used in training. PMID:23464038
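An ideal binary time-frequency mask of the kind described above can be sketched as follows: compute spectrograms of the separately known target and noise, keep only the time-frequency units whose local SNR exceeds a threshold criterion, and resynthesize the masked mixture. The -6 dB criterion and the STFT settings below are illustrative assumptions, not the study's values.

```python
# Minimal sketch of an ideal binary time-frequency mask (assumes the clean target
# and the noise are known separately). Criterion and STFT settings are illustrative.
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(target, noise, fs, criterion_db=-6.0, nperseg=512):
    _, _, T = stft(target, fs, nperseg=nperseg)
    _, _, N = stft(noise, fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10(np.abs(T) / (np.abs(N) + 1e-12) + 1e-12)
    mask = (local_snr_db > criterion_db).astype(float)    # keep high-SNR TF units
    _, _, M = stft(target + noise, fs, nperseg=nperseg)   # mixture to be masked
    _, processed = istft(M * mask, fs, nperseg=nperseg)
    return processed

fs = 16000
t = np.arange(fs) / fs
target = np.sin(2 * np.pi * 440 * t)                      # stand-in for a speech signal
noise = np.random.default_rng(2).normal(0, 0.5, fs)
out = ideal_binary_mask(target, noise, fs)
```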
Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan
2017-01-01
Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.
Locus of word frequency effects in spelling to dictation: Still at the orthographic level!
Bonin, Patrick; Laroche, Betty; Perret, Cyril
2016-11-01
The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
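The additive-factors inference above can be illustrated with a tiny simulation: generate latencies for a 2 x 2 design in which frequency and neighborhood density contribute additively, and test the interaction term, which should then be near zero. All effect sizes are invented purely for the illustration.

```python
# Illustration of the additive-factors logic: simulate spelling latencies for a
# 2x2 design (word frequency x neighborhood density) under an additive model and
# test the interaction term. Effect sizes are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 100                                            # items per cell
rows = []
for freq in (0, 1):                                # 0 = low, 1 = high frequency
    for dens in (0, 1):                            # 0 = sparse, 1 = dense neighborhood
        rt = 900 - 60 * freq - 30 * dens + rng.normal(0, 80, n)   # additive effects
        rows += [dict(freq=freq, dens=dens, rt=r) for r in rt]
df = pd.DataFrame(rows)

fit = smf.ols("rt ~ freq * dens", data=df).fit()
print(fit.params["freq:dens"], fit.pvalues["freq:dens"])   # near zero under additivity
```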
Mendlewicz, L; Nef, F; Simon, Y
2001-01-01
Several studies have been carried out using the Stroop test in eating disorders. Some of these studies have brought to light the existence of cognitive and attention deficits linked principally to weight and to food in anorexic and bulimic patients. The aim of the current study is to replicate and to clarify the existence of cognitive and attention deficits in anorexic patients using the Stroop test and a word recognition test. The recognition test is made up of 160 words; 80 words from the previous Stroop experiment mixed at random and matched from a semantic point of view to 80 distractions. The recognition word test is carried out 2 or 3 days after the Stroop test. Thirty-two subjects took part in the study: 16 female patients hospitalised for anorexia nervosa and 16 normal females as controls. Our results do not enable us to confirm the existence of specific cognitive deficits in anorexic patients. Copyright 2001 S. Karger AG, Basel
Willits, Jon A.; Seidenberg, Mark S.; Saffran, Jenny R.
2014-01-01
What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme –ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun–verb differences in word recognition. PMID:24908342
Halpin, Chris; Rauch, Steven D
2012-01-01
Market surveys consistently show that only 22% of those with hearing loss own hearing aids. This is often ascribed to cosmetics, but is it possible that patients apply a different auditory criterion than do audiologists and manufacturers? We tabulated hearing aid ownership in a survey of 1000 consecutive patients. We separated hearing loss cases, with one cohort in which word recognition in quiet could improve with gain (vs. 40 dB HL) and another without such improvement but nonetheless with audiometric thresholds within the manufacturer's fitting ranges. Overall, we found that exactly 22% of hearing loss patients in this sample owned hearing aids; the same finding has been reported in many previous, well-accepted surveys. However, while all patients in the two cohorts experienced difficulty in noise, patients in the cohort without word recognition improvement were found to own hearing aids at a rate of 0.3%, while those patients whose word recognition could increase with level were found to own hearing aids at a rate of 50%. Results also coherently fit a logistic model where shift of the word recognition performance curve by level corresponded to the likelihood of ownership. In addition to the common attribution of low hearing aid usage to patient denial, cosmetic issues, price, or social stigma, these results provide one alternative explanation based on measurable improvement in word recognition performance. Copyright © 2011 S. Karger AG, Basel.
Response-related fMRI of veridical and false recognition of words.
Heun, Reinhard; Jessen, Frank; Klose, Uwe; Erb, Michael; Granath, Dirk-Oliver; Grodd, Wolfgang
2004-02-01
Studies on the relation between local cerebral activation and retrieval success usually compared high and low performance conditions, and thus showed performance-related activation of different brain areas. Only a few studies directly compared signal intensities of different response categories during retrieval. During verbal recognition, we recently observed increased parieto-occipital activation related to false alarms. The present study intends to replicate and extend this observation by investigating common and differential activation by veridical and false recognition. Fifteen healthy volunteers performed a verbal recognition paradigm using 160 learned target and 160 new distractor words. The subjects had to indicate whether they had learned the word before or not. Echo-planar MRI of blood-oxygen-level-dependent signal changes was performed during this recognition task. Words were classified post hoc according to the subjects' responses, i.e. hits, false alarms, correct rejections and misses. Response-related fMRI-analysis was used to compare activation associated with the subjects' recognition success, i.e. signal intensities related to the presentation of words were compared by the above-mentioned four response types. During recognition, all word categories showed increased bilateral activation of the inferior frontal gyrus, the inferior temporal gyrus, the occipital lobe and the brainstem in comparison with the control condition. Hits and false alarms activated several areas including the left medial and lateral parieto-occipital cortex in comparison with subjectively unknown items, i.e. correct rejections and misses. Hits showed more pronounced activation in the medial, false alarms in the lateral parts of the left parieto-occipital cortex. Veridical and false recognition show common as well as different areas of cerebral activation in the left parieto-occipital lobe: increased activation of the medial parietal cortex by hits may correspond to true recognition, increased activation of the parieto-occipital cortex by false alarms may correspond to familiarity decisions. Further studies are needed to investigate the reasons for false decisions in healthy subjects and patients with memory problems.
Clinical Strategies for Sampling Word Recognition Performance.
Schlauch, Robert S; Carney, Edward
2018-04-17
Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
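The simulation logic described above can be sketched by treating a score on an n-word list as a binomial sample around the listener's true ability and then comparing single-list testing with a PB max search over several lists. The 16-point detection criterion below is a placeholder, not the critical ranges derived in the article.

```python
# Hedged sketch of the simulation logic described above: binomial sampling of
# word recognition scores, with an optional PB max search over several lists.
import numpy as np

rng = np.random.default_rng(4)

def score(true_p, n_words=25, n_lists=1, trials=20_000):
    """Simulated proportion-correct scores; with n_lists > 1, keep the best list (PB max)."""
    s = rng.binomial(n_words, true_p, size=(trials, n_lists)) / n_words
    return s.max(axis=1)

# PB max inflates the estimate of a flat 60% ability (the "undesirable finding").
for lists in (1, 3, 5):
    print(f"{lists} list(s): mean score = {score(0.60, n_lists=lists).mean():.3f}")

# Chance of flagging a true 60% -> 40% decline, using a placeholder 16-point
# critical difference (the article derives proper 95% critical ranges instead).
first, second = score(0.60, n_words=25), score(0.40, n_words=25)
print("25-word list flags drop :", np.mean(first - second >= 0.16))
first, second = score(0.60, n_words=125), score(0.40, n_words=125)
print("125-word list flags drop:", np.mean(first - second >= 0.16))
```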
Wilson, Richard H; Sharrett, Kadie C
2017-01-01
Two previous experiments from our laboratory with 70 interrupted monosyllabic words demonstrated that recognition performance was influenced by the temporal location of the interruption pattern. The interruption pattern (10 interruptions/sec, 50% duty cycle) was always the same and referenced word onset; the only difference between the patterns was the temporal location of the on- and off-segments of the interruption cycle. In the first study, both young and older listeners obtained better recognition performances when the initial on-segment coincided with word onset than when the initial on-segment was delayed by 50 msec. The second experiment with 24 young listeners detailed recognition performance as the interruption pattern was incremented in 10-msec steps through the 0- to 90-msec onset range. Across the onset conditions, 95% of the functions were either flat or U-shaped. To define the effects that interruption pattern locations had on word recognition by older listeners with sensorineural hearing loss as the interruption pattern incremented, re: word onset, from 0 to 90 msec in 10-msec steps. A repeated-measures design with ten interruption patterns (onset conditions) and one uninterruption condition. Twenty-four older males (mean = 69.6 yr) with sensorineural hearing loss participated in two 1-hour sessions. The three-frequency pure-tone average was 24.0 dB HL and word recognition was ≥80% correct. Seventy consonant-vowel nucleus-consonant words formed the corpus of materials with 25 additional words used for practice. For each participant, the 700 interrupted stimuli (70 words by 10 onset conditions), the 70 words uninterrupted, and two practice lists each were randomized and recorded on compact disc in 33 tracks of 25 words each. The data were analyzed at the participant and word levels and compared to the results obtained earlier on 24 young listeners with normal hearing. The mean recognition performance on the 70 words uninterrupted was 91.0% with an overall mean performance on the ten interruption conditions of 63.2% (range: 57.9-69.3%), compared to 80.4% (range: 73.0-87.7%) obtained earlier on the young adults. The best performances were at the extremes of the onset conditions. Standard deviations ranged from 22.1% to 28.1% (24 participants) and from 9.2% to 12.8% (70 words). An arithmetic algorithm categorized the shapes of the psychometric functions across the ten onset conditions. With the older participants in the current study, 40% of the functions were flat, 41.4% were U-shaped, and 18.6% were inverted U-shaped, which compared favorably to the function shapes by the young listeners in the earlier study of 50.0%, 41.4%, and 8.6%, respectively. There were two words on which the older listeners had 40% better performances. Collectively, the data are orderly, but at the individual word or participant level, the data are somewhat volatile, which may reflect auditory processing differences between the participant groups. The diversity of recognition performances by the older listeners on the ten interruption conditions with each of the 70 words supports the notion that the term hearing loss is inclusive of processes well beyond the filtering produced by end-organ sensitivity deficits. American Academy of Audiology
Postprocessing for character recognition using pattern features and linguistic information
NASA Astrophysics Data System (ADS)
Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi
1993-04-01
We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters and is characterized by using two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and by the existence of characters with similar patterns, so it is not practical for a recognition system to recognize all characters in detail. First, the post-processing method generates a candidate character table by recognizing only the simplest features of the characters. Then, it selects suitable words corresponding to the candidate characters in the table by referring to a word and grammar dictionary. If the correct character is included in the candidate character table, this process can correct an error; if it is not included, it cannot. Therefore, the method can presume a character that does not appear in the candidate character table by using linguistic information (the word and grammar dictionaries) and can then verify the presumed character by character recognition using more complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used on the editorials of a Japanese newspaper (Asahi Shinbun).
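A simplified sketch of the two-stage idea described above: a coarse recognizer supplies a candidate character table, word hypotheses are formed only from combinations found in a word dictionary, and positions that no dictionary word can explain are flagged for re-recognition with more complex features. The dictionary and candidate table are toy values, and the grammar dictionary is omitted.

```python
# Simplified sketch of the described post-processing: combine candidate characters
# per position, keep only combinations found in a word dictionary, and flag
# positions for detailed re-recognition when no dictionary word fits.
from itertools import product

word_dictionary = {"認識", "誤認", "識別"}            # toy lexicon

def dictionary_words(candidate_table):
    """All character combinations from the table that form dictionary words."""
    return ["".join(chars) for chars in product(*candidate_table)
            if "".join(chars) in word_dictionary]

def positions_to_reverify(candidate_table):
    """Positions to re-recognize with complex features if no word fits."""
    return [] if dictionary_words(candidate_table) else list(range(len(candidate_table)))

candidate_table = [["認", "誌"], ["識", "織"]]         # two positions, two candidates each
print(dictionary_words(candidate_table))               # ['認識']
print(positions_to_reverify([["誤", "設"], ["判", "半"]]))  # no word fits -> [0, 1]
```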
NASA Technical Reports Server (NTRS)
1973-01-01
The development, construction, and test of a 100-word vocabulary near real time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via verification display keyboard, relationship of classified word to neighboring word, and a versatile input/output interface to accommodate a variety of applications.
Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E
2004-01-01
This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.
Caffeine Improves Left Hemisphere Processing of Positive Words
Kuchinke, Lars; Lux, Vanessa
2012-01-01
A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893
The Effect of Talker Variability on Word Recognition in Preschool Children
Ryalls, Brigette Oliver; Pisoni, David B.
2012-01-01
In a series of experiments, the authors investigated the effects of talker variability on children’s word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. PMID:9149923
Macedonia, Manuela; Mueller, Karsten
2016-01-01
Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment. PMID:27445918
Word recognition materials for native speakers of Taiwan Mandarin.
Nissen, Shawn L; Harris, Richard W; Dukes, Alycia
2008-06-01
The purpose of this study was to select, digitally record, evaluate, and psychometrically equate word recognition materials that can be used to measure the speech perception abilities of native speakers of Taiwan Mandarin in quiet. Frequently used bisyllabic words produced by male and female talkers of Taiwan Mandarin were digitally recorded and subsequently evaluated using 20 native listeners with normal hearing at 10 intensity levels (-5 to 40 dB HL) in increments of 5 dB. Using logistic regression, 200 words with the steepest psychometric slopes were divided into 4 lists and 8 half-lists that were relatively equivalent in psychometric function slope. To increase auditory homogeneity of the lists, the intensity of words in each list was digitally adjusted so that the threshold of each list was equal to the midpoint between the mean thresholds of the male and female half-lists. Digital recordings of the word recognition lists and the associated clinical instructions are available on CD upon request.
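For illustration, the per-word logistic fits described above can be sketched as follows; the intensity levels match those reported (-5 to 40 dB HL in 5-dB steps), but the response proportions, parameter start values, and selection step are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

levels = np.arange(-5, 45, 5)   # the 10 presentation levels in dB HL

def logistic(x, threshold, slope):
    """Proportion of correct recognitions as a function of level (dB HL)."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

def fit_word(prop_correct):
    """Fit the 50%-correct threshold and psychometric slope for one word."""
    popt, _ = curve_fit(logistic, levels, prop_correct, p0=[15.0, 0.3], maxfev=5000)
    return popt   # (threshold_dB, slope)

# Hypothetical proportions correct for two words at the 10 levels.
words = {
    "word_a": np.array([0.00, 0.05, 0.10, 0.30, 0.60, 0.85, 0.95, 1.00, 1.00, 1.00]),
    "word_b": np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60, 0.70, 0.80, 0.85, 0.90]),
}
fits = {w: fit_word(p) for w, p in words.items()}
by_slope = sorted(fits, key=lambda w: fits[w][1], reverse=True)
print(by_slope, fits)   # rank words by slope; keep the steepest for the lists
```

Equating the lists then amounts to shifting each word's level by the difference between its fitted threshold and the target midpoint.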
Gow, David W; Olson, Bruna B
2015-07-01
Phonotactic frequency effects play a crucial role in a number of debates over language processing and representation. It is unclear, however, whether these effects reflect prelexical sensitivity to phonotactic frequency, or lexical "gang effects" in speech perception. In this paper, we use Granger causality analysis of MR-constrained MEG/EEG data to understand how phonotactic frequency influences neural processing dynamics during auditory lexical decision. Effective connectivity analysis showed weaker feedforward influence from brain regions involved in acoustic-phonetic processing (superior temporal gyrus) to lexical areas (supramarginal gyrus) for high phonotactic frequency words, but stronger top-down lexical influence for the same items. Low entropy nonwords (nonwords judged to closely resemble real words) showed a similar pattern of interactions between brain regions involved in lexical and acoustic-phonetic processing. These results contradict the predictions of a feedforward model of phonotactic frequency facilitation, but support the predictions of a lexically mediated account.
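The abstract names Granger causality over MR-constrained MEG/EEG source estimates; the published analysis is multivariate and far more involved, but the basic bivariate test can be illustrated with statsmodels on two synthetic source time series (the STG/SMG labels and coefficients below are purely hypothetical).

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 500
stg = rng.standard_normal(n)          # stand-in for a superior temporal gyrus source
smg = np.empty(n)                     # stand-in for a supramarginal gyrus source
smg[0] = rng.standard_normal()
for t in range(1, n):
    # SMG partly driven by the previous STG sample (a feedforward influence)
    smg[t] = 0.6 * stg[t - 1] + 0.3 * smg[t - 1] + rng.standard_normal()

# Does the second column (STG) Granger-cause the first column (SMG)?
data = np.column_stack([smg, stg])
results = grangercausalitytests(data, maxlag=2)
print(results[1][0]["ssr_ftest"])     # (F statistic, p value, df_denom, df_num) at lag 1
```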
(Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits
ERIC Educational Resources Information Center
Smith, Mark
2006-01-01
Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…
The Effects of Environmental Context on Recognition Memory and Claims of Remembering
ERIC Educational Resources Information Center
Hockley, William E.
2008-01-01
Recognition memory for words was tested in same or different contexts using the remember/know response procedure. Context was manipulated by presenting words in different screen colors and locations and by presenting words against real-world photographs. Overall hit and false-alarm rates were higher for tests presented in an old context compared…
Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech
ERIC Educational Resources Information Center
Matthews, Joshua; O'Toole, John Mitchell
2015-01-01
The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…
The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words
ERIC Educational Resources Information Center
Xu, Joe; Taft, Marcus
2015-01-01
A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…
ERIC Educational Resources Information Center
Malins, Jeffrey G.; Joanisse, Marc F.
2010-01-01
We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…
Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter
2017-01-01
Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences, and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions.
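Controlling for word frequency, as described above, is often done by residualizing both measures on log frequency and correlating the residuals; the sketch below does exactly that on simulated variables (the coefficients are arbitrary and only meant to show the mechanics).

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, covariate):
    """Correlate x and y after regressing each on the covariate."""
    def residualize(v, c):
        slope, intercept = np.polyfit(c, v, 1)
        return v - (slope * c + intercept)
    return stats.pearsonr(residualize(x, covariate), residualize(y, covariate))

rng = np.random.default_rng(1)
log_freq = rng.normal(3.0, 1.0, 200)                      # log word frequency
n_mwus = 5.0 * log_freq + rng.normal(0.0, 1.0, 200)       # MWU count tracks frequency
age_first_prod = 40 - 2 * log_freq - 0.5 * n_mwus + rng.normal(0.0, 2.0, 200)

r, p = partial_corr(n_mwus, age_first_prod, log_freq)
print(r, p)   # unique association of MWU count with age of first production
```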
Beyond word recognition: understanding pediatric oral health literacy.
Richman, Julia Anne; Huebner, Colleen E; Leggott, Penelope J; Mouradian, Wendy E; Mancl, Lloyd A
2011-01-01
Parental oral health literacy is proposed to be an indicator of children's oral health. The purpose of this study was to test if word recognition, commonly used to assess health literacy, is an adequate measure of pediatric oral health literacy. This study evaluated 3 aspects of oral health literacy and parent-reported child oral health. A 3-part pediatric oral health literacy inventory was created to assess parents' word recognition, vocabulary knowledge, and comprehension of 35 terms used in pediatric dentistry. The inventory was administered to 45 English-speaking parents of children enrolled in Head Start. Parents' ability to read dental terms was not associated with vocabulary knowledge (r=0.29, P<.06) or comprehension (r=0.28, P>.06) of the terms. Vocabulary knowledge was strongly associated with comprehension (r=0.80, P<.001). Parent-reported child oral health status was not associated with word recognition, vocabulary knowledge, or comprehension; however parents reporting either excellent or fair/poor ratings had higher scores on all components of the inventory. Word recognition is an inadequate indicator of comprehension of pediatric oral health concepts; pediatric oral health literacy is a multifaceted construct. Parents with adequate reading ability may have difficulty understanding oral health information.
Meier, Beat; Rey-Mermet, Alodie; Rothen, Nicolas; Graf, Peter
2013-01-01
The goal of this study was to investigate recognition memory performance across the lifespan and to determine how estimates of recollection and familiarity contribute to performance. In each of three experiments, participants from five groups from 14 up to 85 years of age (children, young adults, middle-aged adults, young-old adults, and old-old adults) were presented with high- and low-frequency words in a study phase and were tested immediately afterwards and/or after a one day retention interval. The results showed that word frequency and retention interval affected recognition memory performance as well as estimates of recollection and familiarity. Across the lifespan, the trajectory of recognition memory followed an inverse u-shape function that was neither affected by word frequency nor by retention interval. The trajectory of estimates of recollection also followed an inverse u-shape function, and was especially pronounced for low-frequency words. In contrast, estimates of familiarity did not differ across the lifespan. The results indicate that age differences in recognition memory are mainly due to differences in processes related to recollection while the contribution of familiarity-based processes seems to be age-invariant. PMID:24198796
Wolfe, Jace; Morais Duke, Mila; Schafer, Erin; Cire, George; Menapace, Christine; O'Neill, Lori
2016-01-01
The objective of this study was to evaluate the potential improvement in word recognition in quiet and in noise obtained with use of a Bluetooth-compatible wireless hearing assistance technology (HAT) relative to the acoustic mobile telephone condition (e.g. the mobile telephone receiver held to the microphone of the sound processor). A two-way repeated measures design was used to evaluate differences in telephone word recognition obtained in quiet and in competing noise in the acoustic mobile telephone condition compared to performance obtained with use of the CI sound processor and a telephone HAT. Sixteen adult users of Nucleus cochlear implants and the Nucleus 6 sound processor were included in this study. Word recognition over the mobile telephone in quiet and in noise was significantly better with use of the wireless HAT compared to performance in the acoustic mobile telephone condition. Word recognition over the mobile telephone was better in quiet when compared to performance in noise. The results of this study indicate that use of a wireless HAT improves word recognition over the mobile telephone in quiet and in noise relative to performance in the acoustic mobile telephone condition for a group of adult cochlear implant recipients.
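A two-way repeated-measures design of this kind (telephone condition crossed with listening environment, measured within the same listeners) can be analyzed as sketched below with statsmodels; the scores are simulated and the effect sizes are not taken from the study.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = []
for subject in range(16):                      # 16 adult CI recipients
    for condition in ("acoustic", "wireless_HAT"):
        for environment in ("quiet", "noise"):
            base = 70.0 if condition == "acoustic" else 85.0   # hypothetical % correct
            penalty = 15.0 if environment == "noise" else 0.0
            rows.append({"subject": subject, "condition": condition,
                         "environment": environment,
                         "score": base - penalty + rng.normal(0.0, 5.0)})
df = pd.DataFrame(rows)

# Repeated-measures ANOVA: condition x environment, subjects as the repeated factor.
print(AnovaRM(df, depvar="score", subject="subject",
              within=["condition", "environment"]).fit())
```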
When fear forms memories: threat of shock and brain potentials during encoding and recognition.
Weymar, Mathias; Bradley, Margaret M; Hamm, Alfons O; Lang, Peter J
2013-03-01
The anticipation of highly aversive events is associated with measurable defensive activation, and both animal and human research suggests that stress-inducing contexts can facilitate memory. Here, we investigated whether encoding stimuli in the context of anticipating an aversive shock affects recognition memory. Event-related potentials (ERPs) were measured during a recognition test for words that were encoded in a font color that signaled threat or safety. At encoding, cues signaling threat of shock, compared to safety, prompted enhanced P2 and P3 components. Correct recognition of words encoded in the context of threat, compared to safety, was associated with an enhanced old-new ERP difference (500-700 msec; centro-parietal), and this difference was most reliable for emotional words. Moreover, larger old-new ERP differences when recognizing emotional words encoded in a threatening context were associated with better recognition, compared to words encoded in safety. Taken together, the data indicate enhanced memory for stimuli encoded in a context in which an aversive event is merely anticipated, which could assist in understanding effects of anxiety and stress on memory processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
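The reported old/new effect is a mean-amplitude difference over centro-parietal sensors in a 500-700 ms window; a bare-bones version of that computation is sketched below on synthetic epochs (trial counts, channel set, and effect size are invented, and this is not the authors' pipeline).

```python
import numpy as np

fs = 250                                        # sampling rate in Hz
times = np.arange(-0.2, 1.0, 1 / fs)            # epoch from -200 to 1000 ms
window = (times >= 0.5) & (times <= 0.7)        # 500-700 ms analysis window

rng = np.random.default_rng(3)
# Synthetic epochs: trials x centro-parietal channels x samples.
old = rng.normal(0.0, 1.0, (60, 4, times.size))
old[:, :, window] += 1.5                        # simulate a parietal old/new positivity
new = rng.normal(0.0, 1.0, (60, 4, times.size))

# Mean amplitude in the window, averaged over channels and trials.
difference = old[:, :, window].mean() - new[:, :, window].mean()
print("old/new difference (arbitrary units):", round(difference, 2))
```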
Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.
Liu, Han-Hui
2018-01-01
Background: Most previous inattentional blindness (IB) studies focused on the factors that contribute to the detection of unattended stimuli. Age-related changes in IB have rarely been investigated across all age groups. In the current study, using the dual-task IB paradigm, we aimed to explore the age-related effects of attended stimulus type and of congruency between attended and unattended stimuli on IB. Methods: The current study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) in the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) in the IB experiment. We applied the superimposed picture and word streams paradigm to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups showed significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Recognition of unattended pictures and words decreased from adolescents to young adults to middle-aged adults. When the pictures and words were congruent, all participants showed significantly higher recognition scores for unattended stimuli than in the incongruent condition. Adolescents and young adults did not show recognition differences when the primary task involved attending to pictures versus words. Conclusion: All participants showed better recognition scores for attended stimuli than for unattended stimuli, and recognition scores decreased from adolescents to young and middle-aged adults. The findings partly support the attention capacity models of IB.
Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin
2015-02-01
The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.
Task-Dependent Masked Priming Effects in Visual Word Recognition
Kinoshita, Sachiko; Norris, Dennis
2012-01-01
A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316
The picture superiority effect in a cross-modality recognition task.
Stenbert, G; Radeborg, K; Hedman, L R
1995-07-01
Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.
Shen, Wei; Qu, Qingqing; Tong, Xiuhong
2018-05-01
The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed under both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.
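In printed-word visual-world studies like this, the dependent measure is the proportion of fixations on each interest area per time bin; a minimal pandas sketch of that aggregation is given below over fabricated sample-level data (the column names and probabilities are hypothetical).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
areas = ["target", "phonological_competitor", "distractor_1", "distractor_2"]
# One row per eye-tracking sample: the trial, a 50-ms time bin, and the fixated area.
samples = pd.DataFrame({
    "trial": rng.integers(0, 40, 8000),
    "time_ms": rng.integers(0, 1000, 8000) // 50 * 50,
    "fixated": rng.choice(areas, 8000, p=[0.45, 0.25, 0.15, 0.15]),
})

# Proportion of fixations on each interest area within each time bin (pooled over trials).
prop = (samples.groupby("time_ms")["fixated"]
        .value_counts(normalize=True)
        .unstack(fill_value=0.0))
print(prop.head())   # a competitor effect shows as the competitor curve exceeding the distractors
```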
A System for Mailpiece ZIP Code Assignment through Contextual Analysis. Phase 2
1991-03-01
Keywords: segmentation; address block interpretation; automatic feature generation; word recognition; feature detection; word verification; optical character recognition; directory. From the report's motivation: the United States Postal Service (USPS) deploys large numbers of optical character recognition (OCR) machines.
Hearing taboo words can result in early talker effects in word recognition for female listeners.
Tuft, Samantha E; McLennan, Conor T; Krestar, Maura L
2018-02-01
Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.
Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J
2017-01-01
In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition
Poellmann, Katja; Kong, Ying-Yee
2017-01-01
Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135
The word-frequency paradox for recall/recognition occurs for pictures.
Karlsen, Paul Johan; Snodgrass, Joan Gay
2004-08-01
A yes-no recognition task and two recall tasks were conducted using pictures of high and low familiarity ratings. Picture familiarity had analogous effects to word frequency, and replicated the word-frequency paradox in recall and recognition. Low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.
Shen, Wei; Qu, Qingqing; Li, Xingshan
2016-07-01
In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.
Evaluation of a voice recognition system for the MOTAS pseudo pilot station function
NASA Technical Reports Server (NTRS)
Houck, J. A.
1982-01-01
The Langley Research Center has undertaken a technology development activity to provide a capability, the mission oriented terminal area simulation (MOTAS), wherein terminal area and aircraft systems studies can be performed. An experiment was conducted to evaluate state-of-the-art voice recognition technology, and specifically the Threshold 600 voice recognition system, as an aircraft control input device for the MOTAS pseudo pilot station function. The results of the experiment using ten subjects showed a recognition error rate of 3.67 percent for a 48-word vocabulary tested against a programmed vocabulary of 103 words. After the ten subjects retrained the Threshold 600 system on the words that were misrecognized or rejected, the recognition error rate decreased to 1.96 percent. The rejection rates for both cases were less than 0.70 percent. Based on the results of the experiment, voice recognition technology, and specifically the Threshold 600 voice recognition system, was chosen to fulfill this MOTAS function.
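The reported percentages are simple proportions of misrecognized and rejected tokens over all test utterances; the two-line computation is shown below with hypothetical counts (the actual token totals are not given in the abstract).

```python
def recognition_rates(n_tested, n_misrecognized, n_rejected):
    """Percent misrecognitions and percent rejections over all test tokens."""
    return 100.0 * n_misrecognized / n_tested, 100.0 * n_rejected / n_tested

# Hypothetical counts chosen only to illustrate the arithmetic.
error_pct, reject_pct = recognition_rates(n_tested=2400, n_misrecognized=88, n_rejected=16)
print(f"error {error_pct:.2f}%, rejection {reject_pct:.2f}%")
```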
Visual recognition of permuted words
NASA Astrophysics Data System (ADS)
Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.
2010-02-01
In the current study we examine how letter permutation affects visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison between the reading behavior of people in these languages is also presented. We frame our study in the context of dual-route theories of reading, and observe that the dual-route theory is consistent with our hypothesis of a distinction in the underlying cognitive behavior for reading permuted and non-permuted words. We conducted three lexical decision experiments to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and by 11% for German. We also found a considerable difference in reading behavior between cursive and alphabetic languages, with reading of Urdu being comparatively slower than reading of German due to the characteristics of its cursive script.
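The latency comparisons named above (ANOVA, a distribution-free rank test, and t-tests) can be reproduced in outline with scipy on two hypothetical reaction-time samples; the distributions and sample sizes below are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
rt_normal = rng.lognormal(mean=6.4, sigma=0.20, size=60)     # RTs (ms) for intact words
rt_permuted = rng.lognormal(mean=6.7, sigma=0.25, size=60)   # RTs (ms) for permuted words

t_stat, p_t = stats.ttest_ind(rt_normal, rt_permuted)        # t-test
u_stat, p_u = stats.mannwhitneyu(rt_normal, rt_permuted)     # distribution-free rank test
f_stat, p_f = stats.f_oneway(rt_normal, rt_permuted)         # one-way ANOVA on the same factor
print(p_t, p_u, p_f)
```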
Talker and accent variability effects on spoken word recognition
NASA Astrophysics Data System (ADS)
Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae
2003-04-01
A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.
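Mixing each word token with babble at a stated signal-to-noise ratio amounts to scaling the noise so the power ratio hits the target value in dB; a numpy sketch with stand-in signals follows (the study's actual stimuli and calibration are not reproduced here).

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that speech power over noise power equals the target SNR in dB."""
    noise = noise[:len(speech)]
    scale = np.sqrt(np.mean(speech ** 2) / (np.mean(noise ** 2) * 10 ** (snr_db / 10.0)))
    return speech + scale * noise

rng = np.random.default_rng(6)
word = rng.standard_normal(16000)      # stand-in for a 1-s /hVd/ token at 16 kHz
babble = rng.standard_normal(16000)    # stand-in for multi-speaker babble
for snr in (10, 5, 0):
    mixed = mix_at_snr(word, babble, snr)
    realized = 10 * np.log10(np.mean(word ** 2) / np.mean((mixed - word) ** 2))
    print(snr, round(realized, 2))     # the realized SNR matches the requested one
```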
Voice tracking and spoken word recognition in the presence of other voices
NASA Astrophysics Data System (ADS)
Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar
2004-12-01
We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks of voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agree well with results from voice-tracking experiments while those of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference unlike word-recognition performance which deteriorates quickly with the number of uncorrelated noise sources in the environment which is a response behavior that is associated with linear systems.
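The abstract does not give the model equations, but a hair cell tuned near a Hopf bifurcation is commonly written in normal form, dz/dt = (mu + i*omega0) z - |z|^2 z + F(t), with a threshold applied to the oscillation amplitude; the integration scheme and every parameter value below are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def hopf_response(stimulus, fs, mu=-0.05, omega0=2 * np.pi * 1000.0, threshold=0.01):
    """Drive dz/dt = (mu + i*omega0) z - |z|^2 z + F(t) and return the
    thresholded oscillation amplitude |z(t)|."""
    dt = 1.0 / fs
    lin = np.exp((mu + 1j * omega0) * dt)   # exact update for the linear part
    z = 0.0 + 0.0j
    amp = np.empty(len(stimulus))
    for k, f in enumerate(stimulus):
        z = z * lin + dt * (-(abs(z) ** 2) * z + f)   # add nonlinearity and forcing
        amp[k] = abs(z)
    return np.where(amp > threshold, amp, 0.0)        # simple response threshold

fs = 44100.0
t = np.arange(0.0, 0.05, 1.0 / fs)
tone = 0.02 * np.sin(2 * np.pi * 1000.0 * t)          # weak tone at the cell's best frequency
print(hopf_response(tone, fs).max())
```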
Emotionally enhanced memory for negatively arousing words: storage or retrieval advantage?
Nadarevic, Lena
2017-12-01
People typically remember emotionally negative words better than neutral words. Two experiments are reported that investigate whether emotionally enhanced memory (EEM) for negatively arousing words is based on a storage or retrieval advantage. Participants studied non-word-word pairs that either involved negatively arousing or neutral target words. Memory for these target words was tested by means of a recognition test and a cued-recall test. Data were analysed with a multinomial model that allows the disentanglement of storage and retrieval processes in the present recognition-then-cued-recall paradigm. In both experiments the multinomial analyses revealed no storage differences between negatively arousing and neutral words but a clear retrieval advantage for negatively arousing words in the cued-recall test. These findings suggest that EEM for negatively arousing words is driven by associative processes.
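The multinomial analysis mentioned above can be illustrated, in drastically simplified form, as maximum-likelihood fitting of a small processing tree to joint recognition-then-cued-recall counts; the tree below (storage s, conditional retrieval r, fixed recognition guessing g) is a toy stand-in, not the published model, and the counts are invented.

```python
import numpy as np
from scipy.optimize import minimize

GUESS_OLD = 0.15   # assumed rate of calling an unstored target "old"

def category_probs(s, r, g=GUESS_OLD):
    """Toy tree: a target is stored with probability s; a stored target is
    recognized and then recalled with probability r; an unstored target is
    never recalled and is recognized only by guessing."""
    return np.array([
        s * r,                        # recognized and later recalled
        s * (1 - r) + (1 - s) * g,    # recognized but not recalled
        (1 - s) * (1 - g),            # neither recognized nor recalled
    ])

def neg_log_lik(params, counts):
    p = np.clip(category_probs(*params), 1e-9, 1.0)
    return -np.sum(counts * np.log(p))

counts = np.array([52, 38, 30])       # hypothetical outcome counts for 120 targets
fit = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,), bounds=[(0.01, 0.99)] * 2)
print({"storage s": round(fit.x[0], 3), "retrieval r": round(fit.x[1], 3)})
```

Comparing the fitted storage and retrieval parameters between negatively arousing and neutral lists is what lets such models separate a storage advantage from a retrieval advantage.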
False memory and level of processing effect: an event-related potential study.
Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara
2012-09-12
Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were presented auditorily to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERPs than items studied under deep encoding conditions in a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.
ERIC Educational Resources Information Center
Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.
2004-01-01
The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…
ERIC Educational Resources Information Center
Defeyter, Margaret Anne; Russo, Riccardo; McPartlin, Pamela Louise
2009-01-01
Items studied as pictures are better remembered than items studied as words even when test items are presented as words. The present study examined the development of this picture superiority effect in recognition memory. Four groups ranging in age from 7 to 20 years participated. They studied words and pictures, with test stimuli always presented…
ERIC Educational Resources Information Center
Kambara, Toshimune; Tsukiura, Takashi; Shigemune, Yayoi; Kanno, Akitake; Nouchi, Rui; Yomogida, Yukihito; Kawashima, Ryuta
2013-01-01
This study examined behavioral changes in 15-day learning of word-picture (WP) and word-sound (WS) associations, using meaningless stimuli. Subjects performed a learning task and two recognition tasks under the WP and WS conditions every day for 15 days. Two main findings emerged from this study. First, behavioral data of recognition accuracy and…
Genetic Influences on Early Word Recognition Abilities and Disabilities: A Study of 7-Year-Old Twins
ERIC Educational Resources Information Center
Harlaar, Nicole; Spinath, Frank M.; Dale, Philip S.; Plomin, Robert
2005-01-01
Background: A fundamental issue for child psychology concerns the origins of individual differences in early reading development. Method: A measure of word recognition, the Test of Word Reading Efficiency (TOWRE), was administered by telephone to a representative population sample of 3,909 same-sex and opposite-sex pairs of 7-year-old twins.…
ERIC Educational Resources Information Center
Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel
2011-01-01
Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…
ERIC Educational Resources Information Center
Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli
2016-01-01
The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…
Re-Evaluating Split-Fovea Processing in Word Recognition: A Critical Assessment of Recent Research
ERIC Educational Resources Information Center
Jordan, Timothy R.; Paterson, Kevin B.
2009-01-01
In recent years, some researchers have proposed that a fundamental component of the word recognition process is that each fovea is divided precisely at its vertical midline and that information either side of this midline projects to different, contralateral hemispheres. Thus, when a word is fixated, all letters to the left of the point of…
ERIC Educational Resources Information Center
Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo
2013-01-01
Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…
Reading Habits, Perceptual Learning, and Recognition of Printed Words
ERIC Educational Resources Information Center
Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram
2004-01-01
The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…
ERIC Educational Resources Information Center
Faust, Miriam; Barak, Ofra; Chiarello, Christine
2006-01-01
The present study examined left (LH) and right (RH) hemisphere involvement in discourse processing by testing the ability of each hemisphere to use world knowledge in the form of script contexts for word recognition. Participants made lexical decisions to laterally presented target words preceded by centrally presented script primes (four…
Markopoulos, G; Rutherford, A; Cairns, C; Green, J
2010-08-01
Murnane and Phelps (1993) recommend word pair presentations in local environmental context (EC) studies to prevent associations being formed between successively presented items and their ECs and a consequent reduction in the EC effect. Two experiments were conducted to assess the veracity of this assumption. In Experiment 1, participants memorised single words or word pairs, or categorised them as natural or man made. Their free recall protocols were examined to assess any associations established between successively presented items. Fewest associations were observed when the item-specific encoding task (i.e., natural or man made categorisation of word referents) was applied to single words. These findings were examined further in Experiment 2, where the influence of encoding instructions and stimulus presentation on local EC dependent recognition memory was examined. Consistent with recognition dual-process signal detection model predictions and findings (e.g., Macken, 2002; Parks & Yonelinas, 2008), recollection sensitivity, but not familiarity sensitivity, was found to be local EC dependent. However, local EC dependent recognition was observed only after item-specific encoding instructions, irrespective of stimulus presentation. These findings and the existing literature suggest that the use of single word presentations and item-specific encoding enhances local EC dependent recognition.
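Recollection and familiarity sensitivities of the kind referred to above are often derived with the independence remember/know correction, in which recollection is a probability difference and familiarity is a d'-like measure computed from 'know' rates conditional on the absence of recollection; the study itself fit a dual-process signal detection model to its data, so the sketch below (with invented rates) is only a simplified illustration.

```python
from scipy.stats import norm

def irk_estimates(p_rem_old, p_know_old, p_rem_new, p_know_new):
    """Independence remember/know estimates of recollection and familiarity."""
    recollection = p_rem_old - p_rem_new
    familiarity_hit = p_know_old / (1 - p_rem_old)
    familiarity_fa = p_know_new / (1 - p_rem_new)
    familiarity_d = norm.ppf(familiarity_hit) - norm.ppf(familiarity_fa)
    return recollection, familiarity_d

# Hypothetical rates for items tested in the same vs. a different local context.
print(irk_estimates(p_rem_old=0.45, p_know_old=0.30, p_rem_new=0.05, p_know_new=0.20))
```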
Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E
2005-10-01
Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.
Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric
2016-01-01
Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013
Congruent bodily arousal promotes the constructive recognition of emotional words.
Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas
2017-08-01
Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants completed two blocks of a constructive recognition task, once after a cycling session (increased arousal) and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation than after the cycling, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing, and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.
Juhasz, Barbara J
2016-11-14
Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.
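A regression of fixation durations on the predictors of interest with length and frequency entered as controls can be sketched with statsmodels as below; the data frame is simulated, and the variable names are shorthand for the measures described above rather than the study's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 120
df = pd.DataFrame({
    "gaze_duration": rng.normal(350, 60, n),   # ms, hypothetical
    "familiarity": rng.normal(4.5, 1.0, n),
    "aoa": rng.normal(7.0, 2.0, n),            # age of acquisition in years
    "transparency": rng.normal(3.0, 1.0, n),
    "length": rng.integers(8, 13, n),
    "log_word_freq": rng.normal(1.5, 0.5, n),
    "log_lexeme_freq": rng.normal(2.5, 0.5, n),
})

# Predictors of interest alongside the length and frequency controls.
model = smf.ols("gaze_duration ~ familiarity + aoa + transparency"
                " + length + log_word_freq + log_lexeme_freq", data=df).fit()
print(model.summary().tables[1])
```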
Influences of emotion on context memory while viewing film clips.
Anderson, Lisa; Shimamura, Arthur P
2005-01-01
Participants listened to words while viewing film clips (audio off). Film clips were classified as neutral, positively valenced, negatively valenced, and arousing. Memory was assessed in three ways: recall of film content, recall of words, and context recognition. In the context recognition test, participants were presented a word and determined which film clip was showing when the word was originally presented. In two experiments, context memory performance was disrupted when words were presented during negatively valenced film clips, whereas it was enhanced when words were presented during arousing film clips. Free recall of words presented during the negatively valenced films was also disrupted. These findings suggest multiple influences of emotion on memory performance.
Speed discrimination predicts word but not pseudo-word reading rate in adults and children
Main, Keith L.; Pestilli, Franco; Mezer, Aviv; Yeatman, Jason; Martin, Ryan; Phipps, Stephanie; Wandell, Brian
2014-01-01
Word familiarity may affect magnocellular processes of word recognition. To explore this idea, we measured reading rate, speed-discrimination, and contrast detection thresholds in adults and children with a wide range of reading abilities. We found that speed-discrimination thresholds are higher in children than in adults and are correlated with age. Speed discrimination thresholds are also correlated with reading rate, but only for words, not for pseudo-words. Conversely, we found no correlation between contrast sensitivity and reading rate, and no correlation between speed discrimination thresholds and WASI subtest scores. These findings support the position that reading rate is influenced by magnocellular circuitry attuned to the recognition of familiar word-forms. PMID:25278418
Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.
2016-01-01
Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002), who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination, and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions.
These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals. PMID:28045787
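For readers unfamiliar with the stimulus manipulation, noise vocoding of the kind described above can be summarized in a few steps: the signal is split into contiguous frequency bands, the amplitude envelope of each band is extracted, and each envelope modulates band-limited noise before the channels are summed. The Python sketch below is a minimal illustration under assumed parameters (log-spaced channel edges, fourth-order Butterworth filters, Hilbert envelopes); it is not the processing pipeline used in the study.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=7000.0):
    """Minimal noise vocoder: analysis bands -> envelopes -> envelope-modulated noise."""
    # Log-spaced channel edges between lo and hi (an assumption; studies vary).
    # Assumes hi < fs / 2.
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    out = np.zeros(len(signal), dtype=float)
    noise = np.random.randn(len(signal))
    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)          # analysis band
        env = np.abs(hilbert(band))              # amplitude envelope
        carrier = sosfiltfilt(sos, noise)        # band-limited noise carrier
        out += env * carrier                     # envelope-modulated noise channel
    return out / (np.max(np.abs(out)) + 1e-12)   # normalize to avoid clipping

In practice the envelopes are often low-pass filtered (e.g., below a few hundred Hz) before modulation, and channel edges are frequently chosen to approximate cochlear-implant processing.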
Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm
ERIC Educational Resources Information Center
Parker, Andrew; Dagnall, Neil
2007-01-01
The effects of saccadic bilateral (horizontal) eye movements on gist based false recognition was investigated. Following exposure to lists of words related to a critical but non-studied word participants were asked to engage in 30s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…
Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve
The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.
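To make the structure of the two WARRM scores concrete, the sketch below tallies a percent-correct word-recognition score across all items and recall scores by set size from hypothetical trial-level data. The data layout and field names are assumptions for illustration, not the test's actual scoring procedure.

from collections import defaultdict

# Hypothetical trial records: a set size, plus per-word flags for correct
# recognition (repeating the word) and correct recall at the end of the set.
trials = [
    {"set_size": 2, "recognized": [True, True], "recalled": [True, False]},
    {"set_size": 3, "recognized": [True, False, True], "recalled": [True, False, False]},
]

def warrm_scores(trials):
    rec_correct = rec_total = 0
    recall_by_set = defaultdict(lambda: [0, 0])   # set_size -> [correct, total]
    for t in trials:
        rec_correct += sum(t["recognized"])
        rec_total += len(t["recognized"])
        recall_by_set[t["set_size"]][0] += sum(t["recalled"])
        recall_by_set[t["set_size"]][1] += len(t["recalled"])
    word_recognition = 100.0 * rec_correct / rec_total
    recall_per_set = {k: 100.0 * c / n for k, (c, n) in recall_by_set.items()}
    overall_recall = 100.0 * sum(c for c, _ in recall_by_set.values()) \
                     / sum(n for _, n in recall_by_set.values())
    return word_recognition, recall_per_set, overall_recall

print(warrm_scores(trials))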
Conceptually based vocabulary intervention: second graders' development of vocabulary words.
Dimling, Lisa M
2010-01-01
An instructional strategy was investigated that addressed the needs of deaf and hard of hearing students through a conceptually based sign language vocabulary intervention. A single-subject multiple-baseline design was used to determine the effects of the vocabulary intervention on word recognition, production, and comprehension. Six students took part in the 30-minute intervention over 6-8 weeks, learning 12 new vocabulary words each week by means of the three intervention components: (a) word introduction, (b) word activity (semantic mapping), and (c) practice. Results indicated that the vocabulary intervention successfully improved all students' recognition, production, and comprehension of the vocabulary words and phrases.
Mark My Words: Tone of Voice Changes Affective Word Representations in Memory
Schirmer, Annett
2010-01-01
The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154
Usage of semantic representations in recognition memory.
Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun
2017-11-01
Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are used in remembering both directly and indirectly (i.e., mediated by perceptual representations). Studies using the memory conjunction error (MCE) paradigm, in which lures consist of component parts of studied words, have reported semantic facilitation of the rejection of lures; however, attention to the components of the lures could itself produce this effect. We therefore investigated whether semantic overlap between lures and studied words facilitates MCEs, using Japanese kanji words, for which reading relies more heavily on the whole-word form. The experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1), in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), and most saliently in individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be used directly in remembering, even when perceptual representations of studied words are available.
The influence of speech rate and accent on access and use of semantic information.
Sajin, Stanislav M; Connine, Cynthia M
2017-04-01
Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.
Emotion words and categories: evidence from lexical decision.
Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C
2014-05-01
We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.
Brébion, Gildas; Larøi, Frank; Van der Linden, Martial
2010-10-01
Hallucinations in patients with schizophrenia have been associated with a liberal response bias in signal detection and recognition tasks and with various types of source-memory error. We investigated the associations of hallucination proneness with free-recall intrusions and false recognitions of words in a nonclinical sample. A total of 81 healthy individuals were administered a verbal memory task involving free recall and recognition of one nonorganizable and one semantically organizable list of words. Hallucination proneness was assessed by means of a self-rating scale. Global hallucination proneness was associated with free-recall intrusions in the nonorganizable list and with a response bias reflecting tendency to make false recognitions of nontarget words in both types of list. The verbal hallucination score was associated with more intrusions and with a reduced tendency to make false recognitions of words. The associations between global hallucination proneness and two types of verbal memory error in a nonclinical sample corroborate those observed in patients with schizophrenia and suggest that common cognitive mechanisms underlie hallucinations in psychiatric and nonclinical individuals.
Improving language models for radiology speech recognition.
Paulett, John M; Langlotz, Curtis P
2009-02-01
Speech recognition systems have become increasingly popular as a means to produce radiology reports, for reasons both of efficiency and of cost. However, the suboptimal recognition accuracy of these systems can affect the productivity of the radiologists creating the text reports. We analyzed a database of over two million de-identified radiology reports to determine the strongest determinants of word frequency. Our results showed that body site and imaging modality had a similar influence on the frequency of words and of three-word phrases as did the identity of the speaker. These findings suggest that the accuracy of speech recognition systems could be significantly enhanced by further tailoring their language models to body site and imaging modality, which are readily available at the time of report creation.
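The tailoring suggested here amounts to estimating word and phrase frequencies conditioned on body site and imaging modality rather than pooling all reports into a single model. The sketch below illustrates such conditioned counts on toy data; the field names and report text are invented for illustration.

from collections import Counter, defaultdict

# Toy de-identified reports; the 'body_site' and 'modality' fields are illustrative.
reports = [
    {"body_site": "chest", "modality": "CT", "text": "no acute pulmonary embolism"},
    {"body_site": "chest", "modality": "XR", "text": "no acute cardiopulmonary process"},
    {"body_site": "head",  "modality": "CT", "text": "no acute intracranial hemorrhage"},
]

# Unigram and trigram counts per (body_site, modality) context.
unigrams = defaultdict(Counter)
trigrams = defaultdict(Counter)
for r in reports:
    key = (r["body_site"], r["modality"])
    words = r["text"].split()
    unigrams[key].update(words)
    trigrams[key].update(zip(words, words[1:], words[2:]))

def relative_freq(word, body_site, modality):
    """Relative frequency of a word within one (body site, modality) context."""
    counts = unigrams[(body_site, modality)]
    total = sum(counts.values())
    return counts[word] / total if total else 0.0

print(relative_freq("pulmonary", "chest", "CT"))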
Pictures, images, and recollective experience.
Dewhurst, S A; Conway, M A
1994-09-01
Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.
ERIC Educational Resources Information Center
Laxen, Jannika; Lavaur, Jean-Marc
2010-01-01
This study aims to examine the influence of multiple translations of a word on bilingual processing in three translation recognition experiments during which French-English bilinguals had to decide whether two words were translations of each other or not. In the first experiment, words with only one translation were recognized as translations…
Wilson, Richard H
2015-04-01
In 1940, a cooperative effort by the radio networks and Bell Telephone produced the volume unit (vu) meter that has been the mainstay instrument for monitoring the level of speech signals in commercial broadcasting and research laboratories. With the use of computers, today the amplitude of signals can be quantified easily using the root mean square (rms) algorithm. Researchers had previously reported that amplitude estimates of sentences and running speech were 4.8 dB higher when measured with a vu meter than when calculated with rms. This study addresses the vu-rms relation as applied to the carrier phrase and target word paradigm used to assess word-recognition abilities, the premise being that by definition the word-recognition paradigm is a special and different case from that described previously. The purpose was to evaluate the vu and rms amplitude relations for the carrier phrases and target words commonly used to assess word-recognition abilities. In addition, for the target words, the relation between rms level and recognition performance was examined. Descriptive and correlational. Two recorded versions of the Northwestern University Auditory Test No. 6 were evaluated, the Auditec of St. Louis (Auditec) male speaker and the Department of Veterans Affairs (VA) female speaker. Using both visual and auditory cues from a waveform editor, the temporal onsets and offsets were defined for each carrier phrase and each target word. The rms amplitudes for those segments then were computed and expressed in decibels with reference to the maximum digitization range. The data were maintained for each of the four Northwestern University Auditory Test No. 6 word lists. Descriptive analyses were used, with linear regressions evaluating the reliability of the measurement technique and the relation between the rms levels of the target words and recognition performances. Although there was a 1.3 dB difference between the calibration tones, the mean levels of the carrier phrases for the two recordings were -14.8 dB (Auditec) and -14.1 dB (VA) with standard deviations <1 dB. For the target words, the mean amplitudes were -19.9 dB (Auditec) and -18.3 dB (VA) with standard deviations ranging from 1.3 to 2.4 dB. The mean durations for the carrier phrases of both recordings were 593-594 msec, with the mean durations of the target words slightly different, 509 msec (Auditec) and 528 msec (VA). Random relations were observed between the recognition performances and rms levels of the target words. Amplitude and temporal data for the individual words are provided. The rms levels of the carrier phrases closely approximated (±1 dB) the rms levels of the calibration tones, both of which were set to 0 vu (dB). The rms levels of the target words were 5-6 dB below the levels of the carrier phrases and were substantially more variable than the levels of the carrier phrases. The relation between the rms levels of the target words and recognition performances on the words was random. American Academy of Audiology.
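The amplitude measure used in this study, rms level expressed in decibels relative to the maximum digitization range, is easy to reproduce for a hand-marked segment. The sketch below assumes 16-bit audio and illustrative onset/offset times; it is not the authors' analysis code.

import numpy as np

def rms_dbfs(samples, bit_depth=16):
    """rms level of a segment in dB re the maximum digitization range (full scale)."""
    x = np.asarray(samples, dtype=float)
    full_scale = 2 ** (bit_depth - 1)            # e.g., 32768 for 16-bit audio
    rms = np.sqrt(np.mean(x ** 2))
    return 20.0 * np.log10(rms / full_scale + 1e-12)

def segment(samples, fs, onset_s, offset_s):
    """Extract a carrier-phrase or target-word segment by its marked times."""
    return samples[int(onset_s * fs):int(offset_s * fs)]

# Example: level of a hypothetical target word marked at 0.82-1.33 s.
# fs, audio = ...  (read from the recorded word-list file)
# print(rms_dbfs(segment(audio, fs, 0.82, 1.33)))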
Handwritten Word Recognition Using Multi-view Analysis
NASA Astrophysics Data System (ADS)
de Oliveira, J. J.; de A. Freitas, C. O.; de Carvalho, J. M.; Sabourin, R.
This paper contributes to the problem of efficiently recognizing handwritten words from a limited-size lexicon. A multiple-classifier system is developed that analyzes words at three different approximation levels, yielding a computational approach inspired by the human reading process. For each approximation level, a three-module architecture is defined, composed of a zoning mechanism (pseudo-segmenter), a feature extractor, and a classifier. The proposed application is the recognition of Portuguese handwritten month names, for which a best recognition rate of 97.7% was obtained using classifier combination.
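Combining per-level classifiers of the kind described above usually reduces to merging their class scores over the lexicon before choosing the best word. The sketch below shows one common fusion rule (a weighted average of normalized scores); the actual combination scheme used in the paper may differ.

import numpy as np

def combine_classifiers(score_lists, weights=None):
    """
    score_lists: list of arrays, one per classifier, each of shape (n_classes,),
    holding posterior-like scores over the lexicon (e.g., the twelve month names).
    Returns the index of the winning class under a weighted-average rule.
    """
    scores = np.vstack(score_lists).astype(float)
    # Normalize each classifier's scores so they sum to 1 (treat them as posteriors).
    scores /= scores.sum(axis=1, keepdims=True)
    w = np.ones(len(score_lists)) if weights is None else np.asarray(weights, float)
    fused = (w[:, None] * scores).sum(axis=0) / w.sum()
    return int(np.argmax(fused)), fused

# Example with three approximation levels over a 12-word lexicon:
# winner, fused = combine_classifiers([global_scores, medium_scores, local_scores])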
Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia
2018-02-12
Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.
Faces are special but not too special: Spared face recognition in amnesia is based on familiarity
Aly, Mariam; Knight, Robert T.; Yonelinas, Andrew P.
2014-01-01
Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results suggest that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. The findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia, and different types of materials may depend more or less on recollection and familiarity. PMID:20833190
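The ROC analysis referred to here plots cumulative hit and false-alarm rates as the recognition criterion is relaxed across confidence levels. The sketch below derives those points from confidence ratings; fitting a dual-process model to estimate recollection and familiarity is a separate step not shown.

import numpy as np

def roc_points(old_conf, new_conf, n_levels=6):
    """
    old_conf / new_conf: confidence ratings (1 = sure new ... n_levels = sure old)
    for studied (old) and unstudied (new) test items.
    Returns cumulative (false-alarm rate, hit rate) pairs from the strictest
    to the most lenient criterion.
    """
    old_conf = np.asarray(old_conf)
    new_conf = np.asarray(new_conf)
    points = []
    for criterion in range(n_levels, 1, -1):     # call "old" if rating >= criterion
        hit = np.mean(old_conf >= criterion)
        fa = np.mean(new_conf >= criterion)
        points.append((fa, hit))
    return points

# Example with made-up ratings:
# print(roc_points([6, 5, 6, 3, 2], [1, 2, 6, 3, 1]))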
Sullivan, Jessica R.; Assmann, Peter F.; Hossain, Shaikat; Schafer, Erin C.
2017-01-01
Two experiments explored the role of differences in voice gender in the recognition of speech masked by a competing talker in cochlear implant simulations. Experiment 1 confirmed that listeners with normal hearing receive little benefit from differences in voice gender between a target and masker sentence in four- and eight-channel simulations, consistent with previous findings that cochlear implants deliver an impoverished representation of the cues for voice gender. However, gender differences led to small but significant improvements in word recognition with 16 and 32 channels. Experiment 2 assessed the benefits of perceptual training on the use of voice gender cues in an eight-channel simulation. Listeners were assigned to one of four groups: (1) word recognition training with target and masker differing in gender; (2) word recognition training with same-gender target and masker; (3) gender recognition training; or (4) control with no training. Significant improvements in word recognition were observed from pre- to post-test sessions for all three training groups compared to the control group. These improvements were maintained at the late session (one week following the last training session) for all three groups. There was an overall improvement in masked word recognition performance provided by gender mismatch following training, but the amount of benefit did not differ as a function of the type of training. The training effects observed here are consistent with a form of rapid perceptual learning that contributes to the segregation of competing voices but does not specifically enhance the benefits provided by voice gender cues. PMID:28372046
Holistic word processing in dyslexia
Conway, Aisling; Misra, Karuna
2017-01-01
People with dyslexia have difficulty learning to read and many lack fluent word recognition as adults. In a novel task that borrows elements of the ‘word superiority’ and ‘word inversion’ paradigms, we investigate whether holistic word recognition is impaired in dyslexia. In Experiment 1 students with dyslexia and controls judged the similarity of pairs of 6- and 7-letter words or pairs of words whose letters had been partially jumbled. The stimuli were presented in both upright and inverted form with orthographic regularity and orientation randomized from trial to trial. While both groups showed sensitivity to orthographic regularity, both word inversion and letter jumbling were more detrimental to skilled than dyslexic readers supporting the idea that the latter may read in a more analytic fashion. Experiment 2 employed the same task but using shorter, 4- and 5-letter words and a design where orthographic regularity and stimuli orientation was held constant within experimental blocks to encourage the use of either holistic or analytic processing. While there was no difference in reaction time between the dyslexic and control groups for inverted stimuli, the students with dyslexia were significantly slower than controls for upright stimuli. These findings suggest that holistic word recognition, which is largely based on the detection of orthographic regularity, is impaired in dyslexia. PMID:29121046
Loukusa, Soile; Mäkinen, Leena; Kuusikko-Gauffin, Sanna; Ebeling, Hanna; Moilanen, Irma
2014-01-01
Social perception skills, such as understanding the mind and emotions of others, affect children's communication abilities in real-life situations. In addition to autism spectrum disorder (ASD), there is increasing knowledge that children with specific language impairment (SLI) also demonstrate difficulties in their social perception abilities. To compare the performance of children with SLI, ASD and typical development (TD) in social perception tasks measuring Theory of Mind (ToM) and emotion recognition. In addition, to evaluate the association between social perception tasks and language tests measuring word-finding abilities, knowledge of grammatical morphology and verbal working memory. Children with SLI (n = 18), ASD (n = 14) and TD (n = 25) completed two NEPSY-II subtests measuring social perception abilities: (1) Affect Recognition and (2) ToM (includes Verbal and non-verbal Contextual tasks). In addition, children's word-finding abilities were measured with the TWF-2, grammatical morphology by using the Grammatical Closure subtest of ITPA, and verbal working memory by using subtests of Sentence Repetition or Word List Interference (chosen according the child's age) of the NEPSY-II. Children with ASD scored significantly lower than children with SLI or TD on the NEPSY-II Affect Recognition subtest. Both SLI and ASD groups scored significantly lower than TD children on Verbal tasks of the ToM subtest of NEPSY-II. However, there were no significant group differences on non-verbal Contextual tasks of the ToM subtest of the NEPSY-II. Verbal tasks of the ToM subtest were correlated with the Grammatical Closure subtest and TWF-2 in children with SLI. In children with ASD correlation between TWF-2 and ToM: Verbal tasks was moderate, almost achieving statistical significance, but no other correlations were found. Both SLI and ASD groups showed difficulties in tasks measuring verbal ToM but differences were not found in tasks measuring non-verbal Contextual ToM. The association between Verbal ToM tasks and language tests was stronger in children with SLI than in children with ASD. There is a need for further studies in order to understand interaction between different areas of language and cognitive development. © 2014 Royal College of Speech and Language Therapists.
Influences of spoken word planning on speech recognition.
Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M
2007-09-01
In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. 2007 APA
When Is All Understood and Done? The Psychological Reality of the Recognition Point
ERIC Educational Resources Information Center
Bolte, Jens; Uhe, Mechtild
2004-01-01
Using lexical decision, the effects of primes of different length on spoken word recognition were evaluated in three partial repetition priming experiments. Prime length was determined via gating (Experiments 1a and 2a). It was shorter than, equivalent to, or longer than the recognition point (RP), or a complete word. In Experiments 1b and 1c,…
Word length and lexical activation: longer is better.
Pitt, Mark A; Samuel, Arthur G
2006-10-01
Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a system, long words should produce stronger lexical activation than short words, for 2 reasons: Long words provide more bottom-up evidence than short words, and short words are subject to greater inhibition due to the existence of more similar words. Four experiments provide evidence for this view. In addition, reaction-time-based partitioning of the data shows that long words generate greater activation that is available both earlier and for a longer time than is the case for short words. As a result, lexical influences on phoneme identification are extremely robust for long words but are quite fragile and condition-dependent for short words. Models of word recognition must consider words of all lengths to capture the true dynamics of lexical activation. Copyright 2006 APA.
Lexical association and false memory for words in two cultures.
Lee, Yuh-shiow; Chiang, Wen-Chi; Hung, Hsu-Ching
2008-01-01
This study examined the relationship between language experience and false memory produced by the DRM paradigm. The word lists used in Stadler et al. (Memory & Cognition, 27, 494-500, 1999) were first translated into Chinese. False recall and false recognition for critical non-presented targets were then tested in a group of native Chinese speakers. The average co-occurrence rate of the list word and the critical word was calculated based on two large Chinese corpora. List-level analyses revealed that the correlation between the American and Taiwanese participants was significant only in false recognition. More importantly, the co-occurrence rate was significantly correlated with false recall and recognition of Taiwanese participants, and not of American participants. In addition, the backward association strength based on Nelson et al. (The University of South Florida word association, rhyme and word fragment norms, 1999) was significantly correlated with false recall of American participants and not of Taiwanese participants. Results are discussed in terms of the relationship between language experience and lexical association in creating false memory for word lists.
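One way to operationalize the co-occurrence rate described here is the proportion of contexts containing a list word that also contain the critical word, averaged over the list. The sketch below illustrates this with a toy corpus; the segmentation unit and normalization used for the Chinese corpora are assumptions.

def cooccurrence_rate(list_words, critical_word, documents):
    """
    Mean proportion of documents containing each list word that also contain
    the critical word (one simple way to define a co-occurrence rate).
    """
    rates = []
    for w in list_words:
        containing = [doc for doc in documents if w in doc]
        if not containing:
            rates.append(0.0)
            continue
        together = sum(1 for doc in containing if critical_word in doc)
        rates.append(together / len(containing))
    return sum(rates) / len(rates)

# Toy example with pre-tokenized documents represented as sets of words:
docs = [{"bed", "rest", "dream"}, {"rest", "pillow"}, {"dream", "sleep", "night"}]
print(cooccurrence_rate(["bed", "rest", "dream"], "sleep", docs))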
The influence of talker and foreign-accent variability on spoken word identification.
Bent, Tessa; Holt, Rachael Frush
2013-03-01
In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.
The coupling of emotion and cognition in the eye: introducing the pupil old/new effect.
Võ, Melissa L-H; Jacobs, Arthur M; Kuchinke, Lars; Hofmann, Markus; Conrad, Markus; Schacht, Annekathrin; Hutzler, Florian
2008-01-01
The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented ("old") or not ("new"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher amount of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.
The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.
Norris, Dennis
2006-04-01
This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
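The central claim, that readers act as optimal Bayesian decision makers, can be illustrated with a toy posterior computation in which the probability of each candidate word given noisy perceptual evidence is proportional to its likelihood times a frequency-based prior. The sketch below is a didactic simplification, not the published Bayesian Reader model; the feature coding, Gaussian noise assumption, and parameter values are illustrative.

import numpy as np

def word_posteriors(sample, lexicon, priors, noise_sd=1.0):
    """
    sample:  noisy perceptual evidence, one value per letter position (toy coding).
    lexicon: dict word -> 'true' feature vector (same toy coding).
    priors:  dict word -> prior probability (e.g., normalized word frequency).
    Posterior over words assuming Gaussian perceptual noise around the true vector.
    """
    words = list(lexicon)
    log_post = []
    for w in words:
        mu = np.asarray(lexicon[w], dtype=float)
        log_lik = -0.5 * np.sum((np.asarray(sample) - mu) ** 2) / noise_sd ** 2
        log_post.append(log_lik + np.log(priors[w]))
    log_post = np.asarray(log_post)
    post = np.exp(log_post - log_post.max())     # stable softmax over log posteriors
    post /= post.sum()
    return dict(zip(words, post))

# Toy lexicon: the higher-frequency word wins under ambiguous evidence.
lex = {"cat": [1.0, 0.0, 1.0], "cot": [1.0, 1.0, 1.0]}
pri = {"cat": 0.8, "cot": 0.2}
print(word_posteriors([1.0, 0.5, 1.0], lex, pri))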
White, Corey N.; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M.; Ratcliff, Roger
2014-01-01
Recognition memory studies often find that emotional items are more likely than neutral items to be labeled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium, or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorized words were presented in the lists. Similar, though weaker, effects were observed with categorized words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership. PMID:24303902
Horowitz-Kraus, Tzipi; Farah, Rola; DiFrancesco, Mark; Vannest, Jennifer
2017-02-01
Story listening in children relies on brain regions supporting speech perception, auditory word recognition, syntax, semantics, and discourse abilities, along with the ability to attend and process information (part of executive functions). Speed-of-processing is an early-developed executive function. We used functional and structural magnetic resonance imaging (MRI) to demonstrate the relationship between story listening and speed-of-processing in preschool-age children. Eighteen participants performed story-listening tasks during MRI scans. Functional and structural connectivity analysis was performed using the speed-of-processing scores as regressors. Activation in the superior frontal gyrus during story listening positively correlated with speed-of-processing scores. This region was functionally connected with the superior temporal gyrus, insula, and hippocampus. Fractional anisotropy in the inferior frontooccipital fasciculus, which connects the superior frontal and temporal gyri, was positively correlated with speed-of-processing scores. Our results suggest that speed-of-processing skills in preschool-age children are reflected in functional activation and connectivity during story listening and may act as a biomarker for future academic abilities. Georg Thieme Verlag KG Stuttgart · New York.
The time course of spoken word learning and recognition: studies with artificial lexicons.
Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine
2003-06-01
The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.
The effect of normative context variability on recognition memory.
Steyvers, Mark; Malmberg, Kenneth J
2003-09-01
According to some theories of recognition memory (e.g., S. Dennis & M. S. Humphreys, 2001), the number of different contexts in which words appear determines how memorable individual occurrences of words will be: A word that occurs in a small number of different contexts should be better recognized than a word that appears in a larger number of different contexts. To empirically test this prediction, a normative measure is developed, referred to here as context variability, that estimates the number of different contexts in which words appear in everyday life. These findings confirm the prediction that words low in context variability are better recognized (on average) than words that are high in context variability. (c) 2003 APA, all rights reserved
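Context variability as described here is a normative corpus count: the number of distinct contexts (e.g., documents) in which a word occurs. A toy version of the computation is sketched below; the corpus and the exact definition of a context used by the authors are not reproduced.

from collections import defaultdict

def context_variability(documents):
    """
    documents: iterable of token lists, one per context (e.g., one document each).
    Returns word -> number of distinct contexts containing the word.
    Dividing by len(documents) gives a proportion, often log-transformed in practice.
    """
    contexts = defaultdict(set)
    for doc_id, tokens in enumerate(documents):
        for tok in set(tokens):
            contexts[tok].add(doc_id)
    return {w: len(ids) for w, ids in contexts.items()}

docs = [["the", "lease", "was", "signed"],
        ["the", "dog", "barked"],
        ["a", "dog", "and", "a", "cat"]]
print(context_variability(docs))   # 'dog' appears in 2 contexts, 'lease' in 1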
Auditory word recognition: extrinsic and intrinsic effects of word frequency.
Connine, C M; Titone, D; Wang, J
1993-01-01
Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.
Famous talker effects in spoken word recognition.
Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T
2014-01-01
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.
Processing Stages Underlying Word Recognition in the Anteroventral Temporal Lobe
Halgren, Eric; Wang, Chunmao; Schomer, Donald L.; Knake, Susanne; Marinkovic, Ksenija; Wu, Julian; Ulbert, Istvan
2006-01-01
The anteroventral temporal lobe integrates visual, lexical, semantic and mnestic aspects of word-processing, through its reciprocal connections with the ventral visual stream, language areas, and the hippocampal formation. We used linear microelectrode arrays to probe population synaptic currents and neuronal firing in different cortical layers of the anteroventral temporal lobe, during semantic judgments with implicit priming, and overt word recognition. Since different extrinsic and associative inputs preferentially target different cortical layers, this method can help reveal the sequence and nature of local processing stages at a higher resolution than was previously possible. The initial response in inferotemporal and perirhinal cortices is a brief current sink beginning at ~120ms, and peaking at ~170ms. Localization of this initial sink to middle layers suggests that it represents feedforward input from lower visual areas, and simultaneously increased firing implies that it represents excitatory synaptic currents. Until ~800ms, the main focus of transmembrane current sinks alternates between middle and superficial layers, with the superficial focus becoming increasingly dominant after ~550ms. Since superficial layers are the target of local and feedback associative inputs, this suggests an alternation in predominant synaptic input between feedforward and feedback modes. Word repetition does not affect the initial perirhinal and inferotemporal middle layer sink, but does decrease later activity. Entorhinal activity begins later (~200ms), with greater apparent excitatory postsynaptic currents and multiunit activity in neocortically-projecting than hippocampal-projecting layers. In contrast to perirhinal and inferotemporal responses, entorhinal responses are larger to repeated words during memory retrieval. These results identify a sequence of physiological activation, beginning with a sharp activation from lower level visual areas carrying specific information to middle layers. This is followed by feedback and associative interactions involving upper cortical layers, which are abbreviated for repeated words. Following bottom-up and associative stages, top-down recollective processes may be driven by entorhinal cortex. Word processing involves a systematic sequence of fast feedforward information transfer from visual areas to anteroventral temporal cortex, followed by prolonged interactions of this feedforward information with local associations, and feedback mnestic information from the medial temporal lobe. PMID:16488158
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.
A Bridge between Pictures and Print.
ERIC Educational Resources Information Center
Jeffree, Dorothy
1981-01-01
The experiment investigated the feasibility of bridging the gap between the recognition of pictures and the recognition of words in four mentally handicapped adolescents by adapting a modified version of symbol accentuation (in which a printed word looks like the object it represents). (SB)
Measuring Reading Performance Informally.
ERIC Educational Resources Information Center
Powell, William R.
To improve the accuracy of the informal reading inventory (IRI), a differential set of criteria is necessary for both word recognition and comprehension scores for different levels and reading conditions. In initial evaluation, word recognition scores should reflect only errors of insertions, omissions, mispronunciations, substitutions, unknown…
Microcomputers and Preschoolers.
ERIC Educational Resources Information Center
Evans, Dina
Preschool children can benefit by working with microcomputers. Thinking skills are enhanced by software games that focus on logic, memory, problem solving, and pattern recognition. Counting, sequencing, and matching games develop mathematics skills, and word games focusing on basic letter symbol and word recognition develop language skills.…
Speech as a pilot input medium
NASA Technical Reports Server (NTRS)
Plummer, R. P.; Coler, C. R.
1977-01-01
The speech recognition system under development is a trainable pattern classifier based on a maximum-likelihood technique. An adjustable uncertainty threshold allows the rejection of borderline cases for which the probability of misclassification is high. The syntax of the command language spoken may be used as an aid to recognition, and the system adapts to changes in pronunciation if feedback from the user is available. Words must be separated by 0.25-second gaps. The system runs in real time on a minicomputer (PDP 11/10) and was tested on 120,000 speech samples from 10- and 100-word vocabularies. The results of these tests were 99.9% correct recognition for a vocabulary consisting of the ten digits, and 99.6% recognition for a 100-word vocabulary of flight commands, with a 5% rejection rate in each case. With no rejection, the recognition accuracies for the same vocabularies were 99.5% and 98.6%, respectively.
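The recognizer described above pairs a maximum-likelihood pattern classifier with an uncertainty threshold that rejects borderline utterances. The sketch below shows a schematic version of that decision rule; the feature representation, Gaussian template assumption, and threshold value are illustrative rather than taken from the original system.

import numpy as np

def classify_with_rejection(features, templates, noise_var=1.0, margin=2.0):
    """
    features:  feature vector extracted from the spoken word.
    templates: dict word -> mean feature vector learned during training (>= 2 words).
    Returns the most likely word, or None if the log-likelihood margin over the
    runner-up is below 'margin' (i.e., the probability of misclassification is high).
    """
    words = list(templates)
    log_liks = np.array([
        -0.5 * np.sum((np.asarray(features) - np.asarray(templates[w])) ** 2) / noise_var
        for w in words
    ])
    order = np.argsort(log_liks)[::-1]
    best, runner_up = order[0], order[1]
    if log_liks[best] - log_liks[runner_up] < margin:
        return None                       # reject: decision is too uncertain
    return words[best]

# templates would be estimated from training utterances of each vocabulary word.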
Early access to abstract representations in developing readers: evidence from masked priming.
Perea, Manuel; Mallouh, Reem Abu; Carreiras, Manuel
2013-07-01
A commonly shared assumption in the field of visual-word recognition is that retinotopic representations are rapidly converted into abstract representations. Here we examine the role of visual form vs. abstract representations during the early stages of word processing - as measured by masked priming - in young children (3rd and 6th Graders) and adult readers. To maximize the chances of detecting an effect of visual form, we employed a language with a very intricate orthography, Arabic. If visual form plays a role in the early stages of processing, greater benefit would be expected from related primes that have the same visual form (in terms of the ligation pattern between a word's letters) as the target word (e.g., [ktz b-ktA b], where the three initial letters are connected in prime and target) than from those that do not (e.g., [ktxb-ktA b]). Results showed that the magnitude of the priming effect relative to an unrelated condition was remarkably similar for both types of prime. Thus, despite the visual complexity of Arabic orthography, there is fast access to abstract letter representations not only in adult readers but also in developing readers. © 2013 Blackwell Publishing Ltd.
Miwa, Koji; Libben, Gary; Dijkstra, Ton; Baayen, Harald
2014-01-01
This lexical decision study with eye tracking of Japanese two-kanji-character words investigated the order in which a whole two-character word and its morphographic constituents are activated in the course of lexical access, the relative contributions of the left and the right characters in lexical decision, the depth to which semantic radicals are processed, and how nonlinguistic factors affect lexical processes. Mixed-effects regression analyses of response times and subgaze durations (i.e., first-pass fixation time spent on each of the two characters) revealed joint contributions of morphographic units at all levels of the linguistic structure with the magnitude and the direction of the lexical effects modulated by readers' locus of attention in a left-to-right preferred processing path. During the early time frame, character effects were larger in magnitude and more robust than radical and whole-word effects, regardless of the font size and the type of nonwords. Extending previous radical-based and character-based models, we propose a task/decision-sensitive character-driven processing model with a level-skipping assumption: Connections from the feature level bypass the lower radical level and link up directly to the higher character level.
Effects of age of acquisition on brain activation during Chinese character recognition.
Weekes, Brendan Stuart; Chan, Alice H D; Tan, Li Hai
2008-01-01
The age of acquisition of a word (AoA) has a specific effect on brain activation during word identification in English and German. However, the neural locus of AoA effects differs across studies. According to Hernandez and Fiebach [Hernandez, A., & Fiebach, C. (2006). The brain bases of reading late-learned words: Evidence from functional MRI. Visual Cognition, 13(8), 1027-1043], the effects of AoA on brain activation depend on the predictability of the connections between input (orthography) and output (phonology) in a lexical network. We tested this hypothesis by examining AoA effects in a non-alphabetic script with relatively arbitrary mappings between orthography and phonology--Chinese. Our results showed that the effects of AoA in Chinese speakers are located in brain regions that are spatially distinctive including the bilateral middle temporal gyrus and the left inferior parietal cortex. An additional finding was that word frequency had an independent effect on brain activation in the right middle occipital gyrus only. We conclude that spatially distinctive effects of AoA on neural activity depend on the predictability of the mappings between orthography and phonology and reflect a division of labour towards greater lexical-semantic retrieval in non-alphabetic scripts.
Immediate effects of anticipatory coarticulation in spoken-word recognition
Salverda, Anne Pier; Kleinschmidt, Dave; Tanenhaus, Michael K.
2014-01-01
Two visual-world experiments examined listeners’ use of pre-word-onset anticipatory coarticulation in spoken-word recognition. Experiment 1 established the shortest lag with which information in the speech signal influences eye-movement control, using stimuli such as “The … ladder is the target”. With a neutral token of the definite article preceding the target word, saccades to the referent were not more likely than saccades to an unrelated distractor until 200–240 ms after the onset of the target word. In Experiment 2, utterances contained definite articles that carried natural anticipatory coarticulation pertaining to the onset of the target word (“The ladder … is the target”). A simple Gaussian classifier was able to predict the initial sound of the upcoming target word from formant information from the first few pitch periods of the article’s vowel. With these stimuli, effects of speech on eye-movement control began about 70 ms earlier than in Experiment 1, suggesting rapid use of anticipatory coarticulation. The results are interpreted as support for “data explanation” approaches to spoken-word recognition. Methodological implications for visual-world studies are also discussed. PMID:24511179
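A "simple Gaussian classifier" of the kind mentioned above can be built from class-conditional Gaussian likelihoods over formant measurements taken early in the article's vowel. The sketch below shows the general form; the features (F1/F2 over the first pitch periods) and the example values are assumptions for illustration.

import numpy as np

def train_gaussian_classifier(formants_by_class):
    """formants_by_class: dict upcoming-onset label -> array (n_tokens, n_features)."""
    params = {}
    for label, X in formants_by_class.items():
        X = np.asarray(X, dtype=float)
        params[label] = (X.mean(axis=0), X.var(axis=0) + 1e-6)  # diagonal covariance
    return params

def predict(params, x):
    """Return the label with the highest Gaussian log-likelihood for features x."""
    x = np.asarray(x, dtype=float)
    best, best_ll = None, -np.inf
    for label, (mu, var) in params.items():
        ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

# e.g., F1/F2 (Hz) measured over the first pitch periods of the article's vowel:
params = train_gaussian_classifier({"l": [[430, 1950], [445, 1980]],
                                    "d": [[470, 1700], [460, 1720]]})
print(predict(params, [450, 1900]))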
Semantic size does not matter: "bigger" words are not recognized faster.
Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A
2011-06-01
Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.
Adlof, Suzanne M; Patten, Hannah
2017-03-01
This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Fifty children, with a mean age of 8 years (range 5-12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.
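Hierarchical regressions of the kind reported here follow a standard pattern: enter chronological age first, then add nonword repetition and vocabulary and track the increment in explained variance at each step. A minimal sketch of that workflow follows; the file name and column names (phon_recall, nonword_rep, vocabulary) are assumed placeholders, not the study's variables.

    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("word_learning.csv")  # hypothetical: one row per child

    step1 = smf.ols("phon_recall ~ age_months", data=df).fit()
    step2 = smf.ols("phon_recall ~ age_months + nonword_rep", data=df).fit()
    step3 = smf.ols("phon_recall ~ age_months + nonword_rep + vocabulary", data=df).fit()

    print("R2, age only:", step1.rsquared)
    print("Delta R2, + nonword repetition:", step2.rsquared - step1.rsquared)
    print("Delta R2, + vocabulary:", step3.rsquared - step2.rsquared)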
The impact of task demand on visual word recognition.
Yang, J; Zevin, J
2014-07-11
The left occipitotemporal cortex has been found sensitive to the hierarchy of increasingly complex features in visually presented words, from individual letters to bigrams and morphemes. However, whether this sensitivity is a stable property of the brain regions engaged by word recognition is still unclear. To address the issue, the current study investigated whether different task demands modify this sensitivity. Participants viewed real English words and stimuli with hierarchical word-likeness while performing a lexical decision task (i.e., to decide whether each presented stimulus is a real word) and a symbol detection task. General linear model and independent component analysis indicated strong activation in the fronto-parietal and temporal regions during the two tasks. Furthermore, the bilateral inferior frontal gyrus and insula showed significant interaction effects between task demand and stimulus type in the pseudoword condition. The occipitotemporal cortex showed strong main effects for task demand and stimulus type, but no sensitivity to the hierarchical word-likeness was found. These results suggest that different task demands on semantic, phonological and orthographic processes can influence the involvement of the relevant regions during visual word recognition. Copyright © 2014 IBRO. Published by Elsevier Ltd. All rights reserved.
Normative data for the Maryland CNC Test.
Mendel, Lisa Lucks; Mustain, William D; Magro, Jessica
2014-09-01
The Maryland consonant-vowel nucleus-consonant (CNC) Test is routinely used in Veterans Administration medical centers, yet there is a paucity of published normative data for this test. The purpose of this study was to provide information on the means and distribution of word-recognition scores on the Maryland CNC Test as a function of degree of hearing loss for a veteran population. A retrospective, descriptive design was used. The sample consisted of records from veterans who had Compensation and Pension (C&P) examinations at a Veterans Administration medical center (N = 1,760 ears). Audiometric records of veterans who had C&P examinations during a 10-yr period were reviewed, and the pure-tone averages (PTA4) at four frequencies (1000, 2000, 3000, and 4000 Hz) were documented. The maximum word-recognition score (PBmax) was determined from the performance-intensity functions obtained using the Maryland CNC Test. Correlations were computed between PBmax and PTA4. A wide range of word-recognition scores was obtained at all levels of PTA4 for this population. In addition, a strong negative correlation between the PBmax and the PTA4 was observed, indicating that as PTA4 increased, PBmax decreased. Word-recognition scores decreased significantly as hearing loss increased beyond a mild hearing loss. Although threshold was influenced by age, no statistically significant relationship was found between word-recognition score and the age of the participants. Results from this study provide normative data in table and figure format to assist audiologists in interpreting patient results on the Maryland CNC Test for a veteran population. These results provide a quantitative method for audiologists to use to interpret word-recognition scores based on pure-tone hearing loss. American Academy of Audiology.
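The two derived measures in this study are simple to compute once the audiogram and performance-intensity data are in tabular form: PTA4 is the mean threshold at 1000, 2000, 3000, and 4000 Hz, and the reported relationship is the correlation between PTA4 and PBmax. A small sketch, with hypothetical column names:

    import pandas as pd

    df = pd.read_csv("cnc_records.csv")  # hypothetical audiometric records, one row per ear

    # PTA4: mean pure-tone threshold (dB HL) at 1000, 2000, 3000, and 4000 Hz.
    df["pta4"] = df[["thr_1000", "thr_2000", "thr_3000", "thr_4000"]].mean(axis=1)

    # Pearson correlation between PTA4 and the maximum word-recognition score (PBmax).
    print(df["pta4"].corr(df["pbmax"]))  # the study reports a strong negative correlation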
Santaniello, G; Ferré, P; Rodríguez-Gómez, P; Poch, C; Moreno, Eva M; Hinojosa, J A
2018-06-15
Evidence from prior studies has shown an advantage in recognition memory for emotional compared to neutral words. Whether this advantage is short-lived or rather extends over longer periods, as well as whether the effect depends on words' valence (i.e., positive or negative), remains unknown. In the present ERP/EEG study, we investigated this issue by manipulating the lag distance (LAG-2, LAG-8 and LAG-16) between the presentation of old and new words in an online recognition memory task. LAG differences were observed in behavior, in ERPs, and in the theta frequency band. In line with previous studies, negative words were associated with faster reaction times, higher hit rates and increased amplitude in a positive ERP component between 386 and 564 ms compared to positive and neutral words. Remarkably, the interaction of LAG by EMOTION revealed that negative words were associated with better performance and larger ERP amplitudes only at LAG-2. Also in the LAG-2 condition, emotional words (i.e., positive and negative words) induced a stronger desynchronization in the beta band between 386 and 542 ms compared to neutral words. These early enhanced memory effects for emotional words are discussed in terms of the Negative Emotional Valence Enhances Recapitulation (NEVER) model and the mobilization-minimization hypothesis. Copyright © 2018 Elsevier Ltd. All rights reserved.
[Effect of divided attention on explicit and implicit aspects of recall].
Wippich, W; Schmitt, R; Mecklenbräuker, S
1989-01-01
If subjects have to form word images before spelling a word from the image, results of a repetition of the spelling test reveal a reliable priming effect: Old words can be spelled faster than comparable control words, reflecting a form of implicit memory. We investigated whether this kind of repetition priming remains stable under conditions of divided attention in the study phase. The subjects had to spell meaningful words, meaningless non-words, and non-words that were meaningful with a backward spelling direction (troper, for example). In the testing stage, recognition judgments as a form of explicit memory were required, too. Divided attention in the study phase had a negative effect on explicit memory, as revealed by performance on the recognition task, but had little effect on implicit memory, as revealed by performance on the repetition of the spelling test. A further dissociation between implicit and explicit memory showed up as meaningful words were recognized much better than non-words, whereas implicit memory was uninfluenced by the meaningfulness variable. The disadvantage of backward spellings was not reduced with non-words (like troper) spelled backwards. Finally, we analyzed the relations between spelling times and recognition judgments and found a pattern of dependency for non-words only. Generally, the results are discussed within processing-oriented approaches to implicit memory with a special emphasis on controversial findings concerning the role of attention in different expressions of memory.
Batterink, Laura; Neville, Helen
2011-11-01
The vast majority of word meanings are learned simply by extracting them from context rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M-). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M- words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M- words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time window compared with M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, whereas implicit representations may require more extensive exposure or more time to emerge.
Ragland, J. Daniel; Gur, Ruben C.; Valdez, Jeffrey N.; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J.; Moelter, Stephen T.; Gur, Raquel E.
2015-01-01
Objective: Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Method: Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Results: Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Conclusions: Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies. PMID:16199830
NASA Astrophysics Data System (ADS)
Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.
2002-05-01
Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]
Facilitating Word Recognition and Spelling Using Word Boxes and Word Sort Phonic Procedures.
ERIC Educational Resources Information Center
Joseph, Laurice M.
2002-01-01
Word boxes and word sorts are two word study phonics approaches that involve teaching phonemic awareness, making letter-sound associations, and teaching spelling through the use of well-established behavioral principles. The current study examines the effectiveness of word boxes and word sorts. Findings revealed that word boxes and word sort…
Constructivism in Reading Education.
ERIC Educational Resources Information Center
Stanovich, Keith
1994-01-01
In the development of word recognition skills, self-discovery may not be the most efficacious mode of learning, and it may be useful to isolate or fractionate cognitive components. Successful intervention directed at word recognition involves exogenous constructivism, in which explicit instruction and teacher-directed strategy teaching are not…
Adaptive Changes in Grain-Size in Morphological Processing
ERIC Educational Resources Information Center
Lee, Chang H.
2008-01-01
Substantial neurobiological data indicate that the dominant cortical region for printed-word recognition shifts from a temporo-parietal (dorsal) to an occipito-temporal (ventral) locus with increasing recognition experience. The circuits also have different characteristic speeds of response and word preferences. Previous evidence suggested that…
Prosodic Phonological Representations Early in Visual Word Recognition
ERIC Educational Resources Information Center
Ashby, Jane; Martin, Andrea E.
2008-01-01
Two experiments examined the nature of the phonological representations used during visual word recognition. We tested whether a minimality constraint (R. Frost, 1998) limits the complexity of early representations to a simple string of phonemes. Alternatively, readers might activate elaborated representations that include prosodic syllable…
Ahmad, Fahad N; Hockley, William E
2017-09-01
We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of pairs on separate screens at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs with no difference in discrimination) did not reduce. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A) but in associative recognition, the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded with matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on enhanced familiarity of unitized CW pairs.
Noldy, N E; Stelmack, R M; Campbell, K B
1990-07-01
Event-related potentials were recorded under conditions of intentional or incidental learning of pictures and words, and during the subsequent recognition memory test for these stimuli. Intentionally learned pictures were remembered better than incidentally learned pictures and intentionally learned words, which, in turn, were remembered better than incidentally learned words. In comparison to pictures that were ignored, the pictures that were attended were characterized by greater positive amplitude frontally at 250 ms and centro-parietally at 350 ms and by greater negativity at 450 ms at parietal and occipital sites. There were no effects of attention on the waveforms elicited by words. These results support the view that processing becomes automatic for words, whereas the processing of pictures involves additional effort or allocation of attentional resources. The N450 amplitude was greater for words than for pictures during both acquisition (intentional items) and recognition phases (hit and correct rejection categories for intentional items, hit category for incidental items). Because pictures are better remembered than words, the greater late positive wave (600 ms) elicited by the pictures than the words during the acquisition phase is also consistent with the association between P300 and better memory that has been reported.
Klein, Audrey A; Nelson, Lindsay M; Anker, Justin J
2013-03-01
Though studies have examined attentional bias for alcohol-related information among alcohol-dependent individuals, few have examined memory bias. This study examined attention and recognition memory biases for alcohol-related information among patients recently admitted to residential alcohol treatment (n=100; 40% female). Participants completed a computerized attentional task wherein they classified a centrally-presented digit as odd or even. On some trials, an alcohol word, neutral word, or anagram was presented along with the digit. On these dual trials participants first classified the digit and then classified the other stimulus as a word or nonword. Participants took longer to classify digits that appeared with alcohol words compared to neutral words, suggesting the alcohol words distracted them from processing the digit. In a subsequent recognition memory test, participants showed significantly higher hit rates (i.e., correctly classifying an old item as old) and false alarm rates (i.e., incorrectly classifying a new item as old) to the alcohol words compared to the neutral words, and they also showed a more liberal response bias to alcohol words. The findings suggest that alcohol-dependent individuals exhibit both attention and memory bias for alcohol-related information. Copyright © 2012 Elsevier Ltd. All rights reserved.
Effects of blocking and presentation on the recognition of word and nonsense syllables in noise
NASA Astrophysics Data System (ADS)
Benkí, José R.
2003-10-01
Listener expectations may have significant effects on spoken word recognition, modulating word similarity effects from the lexicon. This study investigates the effect of blocking by lexical status on the recognition of word and nonsense syllables in noise. 240 phonemically matched word and nonsense CVC syllables [Boothroyd and Nittrouer, J. Acoust. Soc. Am. 84, 101-108 (1988)] were presented to listeners at different S/N ratios for identification. In the mixed condition, listeners were presented with blocks containing both words and nonwords, while listeners in the blocked condition were presented with the trials in blocks containing either words or nonwords. The targets were presented in isolation with 50 ms of preceding and following noise. Preliminary results indicate no effect of blocking on accuracy for either word or nonsense syllables; results from neighborhood density analyses will be presented. Consistent with previous studies, a j-factor analysis indicates that words are perceived as containing at least 0.5 fewer independent units than nonwords in both conditions. Relative to previous work on syllables presented in a frame sentence [Benkí, J. Acoust. Soc. Am. 113, 1689-1705 (2003)], initial consonants were perceived significantly less accurately, while vowels and final consonants were perceived at comparable rates.
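The j-factor analysis mentioned here follows Boothroyd and Nittrouer (1988): whole-item recognition probability is modelled as the part (phoneme) probability raised to the power j, the number of effectively independent units, so j = log(p_whole) / log(p_part). A minimal calculation under assumed proportions correct:

    import math

    def j_factor(p_whole, p_part):
        # Number of effectively independent units (Boothroyd & Nittrouer, 1988): p_whole = p_part ** j
        return math.log(p_whole) / math.log(p_part)

    # Illustrative (hypothetical) proportions correct at a single S/N ratio.
    print(j_factor(p_whole=0.40, p_part=0.70))  # words: roughly 2.5 independent units
    print(j_factor(p_whole=0.34, p_part=0.70))  # nonwords: close to 3, one per phoneme in a CVC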
Brysbaert, Marc; Keuleers, Emmanuel; New, Boris
2011-01-01
In this Perspective Article we assess the usefulness of Google's new word frequencies for word recognition research (lexical decision and word naming). We find that, despite the massive corpus on which the Google estimates are based (131 billion words from books published in the United States alone), the Google American English frequencies explain 11% less of the variance in the lexical decision times from the English Lexicon Project (Balota et al., 2007) than the SUBTLEX-US word frequencies, based on a corpus of 51 million words from film and television subtitles. Further analyses indicate that word frequencies derived from recent books (published after 2000) are better predictors of word processing times than frequencies based on the full corpus, and that word frequencies based on fiction books predict word processing times better than word frequencies based on the full corpus. The most predictive word frequencies from Google still do not explain more of the variance in word recognition times of undergraduate students and old adults than the subtitle-based word frequencies. PMID:21713191
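The comparison reported here amounts to regressing word recognition latencies on log frequency from each corpus and comparing the variance explained. A hedged sketch of that analysis is below; the merged table, its column names, and the +1 smoothing constant are assumptions for illustration, not the authors' scripts.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical per-word table: standardized lexical decision RTs plus raw
    # frequency counts from two corpora.
    df = pd.read_csv("elp_frequencies.csv")
    df["log_google"] = np.log10(df["google_freq"] + 1)
    df["log_subtlex"] = np.log10(df["subtlex_freq"] + 1)

    r2_google = smf.ols("zscore_rt ~ log_google", data=df).fit().rsquared
    r2_subtlex = smf.ols("zscore_rt ~ log_subtlex", data=df).fit().rsquared
    print(r2_google, r2_subtlex)  # the article reports the subtitle norms explaining more variance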
Intelligibility of emotional speech in younger and older adults.
Dupuis, Kate; Pichora-Fuller, M Kathleen
2014-01-01
Little is known about the influence of vocal emotions on speech understanding. Word recognition accuracy for stimuli spoken to portray seven emotions (anger, disgust, fear, sadness, neutral, happiness, and pleasant surprise) was tested in younger and older listeners. Emotions were presented in either mixed (heterogeneous emotions mixed in a list) or blocked (homogeneous emotion blocked in a list) conditions. Three main hypotheses were tested. First, vocal emotion affects word recognition accuracy; specifically, portrayals of fear enhance word recognition accuracy because listeners orient to threatening information and/or distinctive acoustical cues such as high pitch mean and variation. Second, older listeners recognize words less accurately than younger listeners, but the effects of different emotions on intelligibility are similar across age groups. Third, blocking emotions in list results in better word recognition accuracy, especially for older listeners, and reduces the effect of emotion on intelligibility because as listeners develop expectations about vocal emotion, the allocation of processing resources can shift from emotional to lexical processing. Emotion was the within-subjects variable: all participants heard speech stimuli consisting of a carrier phrase followed by a target word spoken by either a younger or an older talker, with an equal number of stimuli portraying each of seven vocal emotions. The speech was presented in multi-talker babble at signal to noise ratios adjusted for each talker and each listener age group. Listener age (younger, older), condition (mixed, blocked), and talker (younger, older) were the main between-subjects variables. Fifty-six students (Mage= 18.3 years) were recruited from an undergraduate psychology course; 56 older adults (Mage= 72.3 years) were recruited from a volunteer pool. All participants had clinically normal pure-tone audiometric thresholds at frequencies ≤3000 Hz. There were significant main effects of emotion, listener age group, and condition on the accuracy of word recognition in noise. Stimuli spoken in a fearful voice were the most intelligible, while those spoken in a sad voice were the least intelligible. Overall, word recognition accuracy was poorer for older than younger adults, but there was no main effect of talker, and the pattern of the effects of different emotions on intelligibility did not differ significantly across age groups. Acoustical analyses helped elucidate the effect of emotion and some intertalker differences. Finally, all participants performed better when emotions were blocked. For both groups, performance improved over repeated presentations of each emotion in both blocked and mixed conditions. These results are the first to demonstrate a relationship between vocal emotion and word recognition accuracy in noise for younger and older listeners. In particular, the enhancement of intelligibility by emotion is greatest for words spoken to portray fear and presented heterogeneously with other emotions. Fear may have a specialized role in orienting attention to words heard in noise. This finding may be an auditory counterpart to the enhanced detection of threat information in visual displays. The effect of vocal emotion on word recognition accuracy is preserved in older listeners with good audiograms and both age groups benefit from blocking and the repetition of emotions.
Meyer, Ted A.; Pisoni, David B.
2012-01-01
Objective: The Phonetically Balanced Kindergarten (PBK) Test (Haskins, Reference Note 2) has been used for almost 50 yr to assess spoken word recognition performance in children with hearing impairments. The test originally consisted of four lists of 50 words, but only three of the lists (lists 1, 3, and 4) were considered “equivalent” enough to be used clinically with children. Our goal was to determine if the lexical properties of the different PBK lists could explain any differences between the three “equivalent” lists and the fourth PBK list (List 2) that has not been used in clinical testing. Design: Word frequency and lexical neighborhood frequency and density measures were obtained from a computerized database for all of the words on the four lists from the PBK Test as well as the words from a single PB-50 (Egan, 1948) word list. Results: The words in the “easy” PBK list (List 2) were of higher frequency than the words in the three “equivalent” lists. Moreover, the lexical neighborhoods of the words on the “easy” list contained fewer phonetically similar words than the neighborhoods of the words on the other three “equivalent” lists. Conclusions: It is important for researchers to consider word frequency and lexical neighborhood frequency and density when constructing word lists for testing speech perception. The results of this computational analysis of the PBK Test provide additional support for the proposal that spoken words are recognized “relationally” in the context of other phonetically similar words in the lexicon. Implications of using open-set word recognition tests with children with hearing impairments are discussed with regard to the specific vocabulary and information processing demands of the PBK Test. PMID:10466571
Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E
2006-06-01
The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker can generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker were then compared with those same words spoken by three novel talkers and to performance on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list in which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than a focus on talker-specific acoustic characteristics. On return in 6 mo, listeners performed significantly better on the trained words relative to their initial baseline performance. Although the listeners performed significantly better on trained versus untrained words in isolation, once the trained words were embedded in sentences, no improvement in recognition over untrained words within the same sentences was shown. Older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker and to the same degree as young normal-hearing listeners. The improved performance was maintained across talkers and across time. This might imply that training a listener using a standardized list and talker may still provide benefit when these same words are presented by novel talkers outside the clinic. However, training on isolated words was not sufficient to transfer to fluent speech for the specific sentence materials used within this study. Further investigation is needed regarding approaches to improve a hearing aid user's speech understanding in everyday communication situations.
Orthographic effects in spoken word recognition: Evidence from Chinese.
Qu, Qingqing; Damian, Markus F
2017-06-01
Extensive evidence from alphabetic languages demonstrates a role of orthography in the processing of spoken words. Because alphabetic systems explicitly code speech sounds, such effects are perhaps not surprising. However, it is less clear whether orthographic codes are involuntarily accessed from spoken words in languages with non-alphabetic systems, in which the sound-spelling correspondence is largely arbitrary. We investigated the role of orthography via a semantic relatedness judgment task: native Mandarin speakers judged whether or not spoken word pairs were related in meaning. Word pairs were either semantically related, orthographically related, or unrelated. Results showed that relatedness judgments were made faster for word pairs that were semantically related than for unrelated word pairs. Critically, orthographic overlap on semantically unrelated word pairs induced a significant increase in response latencies. These findings indicate that orthographic information is involuntarily accessed in spoken-word recognition, even in a non-alphabetic language such as Chinese.
NASA Technical Reports Server (NTRS)
1973-01-01
The users manual for the word recognition computer program contains flow charts of the logical diagram, the memory map for templates, the speech analyzer card arrangement, minicomputer input/output routines, and assembly language program listings.
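The original program was an assembly-language system built around stored templates; as a rough, modern illustration of the general template-matching approach to isolated-word recognition (not a reconstruction of that program), a nearest-template classifier over fixed-length feature vectors might look like this:

    import numpy as np

    def nearest_template(features, templates):
        # Return the label of the stored reference pattern closest to the input features.
        best_label, best_dist = None, float("inf")
        for label, ref in templates.items():
            dist = np.linalg.norm(features - ref)  # Euclidean distance between feature vectors
            if dist < best_dist:
                best_label, best_dist = label, dist
        return best_label

    # Hypothetical averaged spectral features for two vocabulary words.
    templates = {"yes": np.array([0.2, 0.7, 0.1]), "no": np.array([0.6, 0.1, 0.3])}
    print(nearest_template(np.array([0.25, 0.65, 0.10]), templates))  # -> "yes"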
Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind
Burton, Harold; Sinclair, Robert J.; Agato, Alvin
2012-01-01
We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836
Studies in automatic speech recognition and its application in aerospace
NASA Astrophysics Data System (ADS)
Taylor, Michael Robinson
Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within military high performance aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato style speech produced under low frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined together with design procedures for the efficient development of pilot-vehicle command and control protocols.
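Dynamic Time Warping, one of the two recognition strategies named above, scores an input utterance against a stored reference while absorbing local differences in speaking rate. A compact sketch on one-dimensional feature sequences (real recognizers compare per-frame spectral vectors) is given below; the "templates" and inputs are invented.

    import numpy as np

    def dtw_distance(x, y):
        # Classic dynamic-programming DTW with symmetric steps and absolute-difference cost.
        n, m = len(x), len(y)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(x[i - 1] - y[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # A slowed-down token of a digit still matches its own template better than another digit's.
    template_one = [1, 3, 4, 3, 1]
    spoken_one = [1, 1, 3, 3, 4, 4, 3, 1]
    spoken_nine = [2, 2, 5, 5, 2]
    print(dtw_distance(spoken_one, template_one))   # small distance
    print(dtw_distance(spoken_nine, template_one))  # larger distance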
The time course of morphological processing during spoken word recognition in Chinese.
Shen, Wei; Qu, Qingqing; Ni, Aiping; Zhou, Junyi; Li, Xingshan
2017-12-01
We investigated the time course of morphological processing during spoken word recognition using the printed-word paradigm. Chinese participants were asked to listen to a spoken disyllabic compound word while simultaneously viewing a printed-word display. Each visual display consisted of three printed words: a semantic associate of the first constituent of the compound word (morphemic competitor), a semantic associate of the whole compound word (whole-word competitor), and an unrelated word (distractor). Participants were directed to detect whether the spoken target word was on the visual display. Results indicated that both the morphemic and whole-word competitors attracted more fixations than the distractor. More importantly, the morphemic competitor began to diverge from the distractor immediately at the acoustic offset of the first constituent, which was earlier than the whole-word competitor. These results suggest that lexical access to the auditory word is incremental and morphological processing (i.e., semantic access to the first constituent) that occurs at an early processing stage before access to the representation of the whole word in Chinese.
Strand, Julia F
2014-03-01
A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception & Psychophysics 62:615-625, 2000; Luce & Pisoni, Ear and Hearing 19:1-36, 1998; McClelland & Elman, Cognitive Psychology 18:1-86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than are the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220-228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663-1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex.
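The categorical/continuous distinction drawn here can be made concrete with a toy lexicon: a categorical metric counts one-phoneme neighbours, while a continuous metric weights every other word by its similarity (and typically by frequency or confusion probabilities). The sketch below is purely illustrative (letters stand in for phonemes) and does not reproduce the Phi-square computation used in Phi-Lex.

    lexicon = {"cat": 120, "bat": 40, "cap": 30, "cut": 25, "dog": 300}  # word: frequency

    def one_phoneme_neighbors(word, lexicon):
        # Categorical metric: same-length words differing in exactly one segment.
        return [w for w in lexicon
                if len(w) == len(word) and w != word
                and sum(a != b for a, b in zip(w, word)) == 1]

    def weighted_competition(word, lexicon):
        # Toy continuous metric: frequency-weighted segmental overlap with every other word.
        total = 0.0
        for w, freq in lexicon.items():
            if w == word or len(w) != len(word):
                continue
            overlap = sum(a == b for a, b in zip(w, word)) / len(word)
            total += freq * overlap
        return total

    print(one_phoneme_neighbors("cat", lexicon))         # ['bat', 'cap', 'cut']
    print(round(weighted_competition("cat", lexicon), 2))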
Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J
2009-02-01
It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.
Gallo, David A; Kensinger, Elizabeth A; Schacter, Daniel L
2006-01-01
According to the distinctiveness heuristic, subjects rely more on detailed recollections (and less on familiarity) when memory is tested for pictures relative to words, leading to reduced false recognition. If so, then neural regions that have been implicated in effortful postretrieval monitoring (e.g., dorsolateral prefrontal cortex) might be recruited less heavily when trying to remember pictures. We tested this prediction with the criterial recollection task. Subjects studied black words, paired with either the same word in red font or a corresponding colored picture. Red words were repeated at study to equate recognition hits for red words and pictures. During fMRI scanning, alternating red word memory tests and picture memory tests were given, using only white words as test stimuli (say "yes" only if you recollect a corresponding red word or picture, respectively). These tests were designed so that subjects had to rely on memory for the criterial information. Replicating prior behavioral work, we found enhanced rejection of lures on the picture test compared to the red word test, indicating that subjects had used a distinctiveness heuristic. Critically, dorsolateral prefrontal activity was reduced when rejecting familiar lures on the picture test, relative to the red word test. These findings indicate that reducing false recognition via the distinctiveness heuristic is not heavily dependent on frontally mediated postretrieval monitoring processes.
Optimizing estimation of hemispheric dominance for language using magnetic source imaging
Passaro, Antony D.; Rezaie, Roozbeh; Moser, Dana C.; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C.
2011-01-01
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold standard Wada procedure have found inconsistent and often less-than accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10–18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10–18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. PMID:21890118
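Individual classifications such as "91.7% left-dominant" are usually derived from a laterality index over homologous regions, LI = (L - R) / (L + R). The sketch below computes it from hypothetical magnitudes of 10-18 Hz desynchronization in left and right temporo-parietal cortex; the threshold of 0.1 and all values are placeholders, not the study's criteria.

    def laterality_index(left, right):
        # LI ranges from -1 to 1; positive values indicate left-hemisphere dominance.
        return (left - right) / (left + right)

    # Hypothetical per-participant magnitudes of task-related 10-18 Hz desynchronization.
    participants = [(12.0, 5.0), (9.5, 8.0), (7.0, 7.5)]
    for left, right in participants:
        li = laterality_index(left, right)
        if li > 0.1:
            label = "left-dominant"
        elif li < -0.1:
            label = "right-dominant"
        else:
            label = "bilateral"
        print(f"LI = {li:+.2f} -> {label}")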
Re-examination of the role of the human acoustic stapedius reflex
NASA Astrophysics Data System (ADS)
Phillips, Dennis P.; Stuart, Andrew; Carpenter, Michael
2002-05-01
The "rollover" seen in the word recognition performance scores of patients with Bell's palsy (facial nerve paralysis) has historically been taken as an indicator of the role of the stapedius reflex in the protection from upward spread of masking. Bell's palsy, however, may be a polyneuropathy, so it is not clear that the poor word recognition performance at high levels is necessarily attributable specifically to impaired facial nerve function. The present article reports two new experiments that probe whether an isolated impairment of the stapedius reflex can produce rollover in word recognition performance-intensity functions. In Experiment 1, performance-intensity functions for monosyllabic speech materials were obtained from ten normal listeners under two listening conditions: normal and low-frequency augmented to offset the effects of the stapedius reflex on the transmission of low-frequency vibrations to the cochlea. There was no effect of the spectral augmentation on word recognition for stimulus levels up to 107 dB SPL. In Experiment 2, six patients who had undergone stapedectomy were tested for rollover using performance-intensity functions. None of the patients showed rollover in their performance-intensity functions, even at stimulus levels in excess of 100 dB HL. These data suggest that if the stapedius reflex has a role in protection from upward spread of masking, then this role is inconsequential for word recognition in quiet.
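Rollover in a performance-intensity function is commonly quantified as (PBmax - PBmin) / PBmax, where PBmin is the poorest score obtained at a presentation level above the one yielding PBmax. A small sketch under assumed scores (percent correct by level in dB):

    def rollover_index(scores_by_level):
        # scores_by_level maps presentation level (dB) to percent-correct word recognition.
        pbmax_level = max(scores_by_level, key=scores_by_level.get)
        pbmax = scores_by_level[pbmax_level]
        higher = [score for level, score in scores_by_level.items() if level > pbmax_level]
        if not higher:
            return 0.0
        return (pbmax - min(higher)) / pbmax

    # Hypothetical functions: essentially flat at high levels vs. marked rollover.
    print(rollover_index({60: 72, 75: 88, 90: 92, 105: 90}))  # ~0.02, no meaningful rollover
    print(rollover_index({60: 70, 75: 86, 90: 60, 105: 44}))  # ~0.49, pronounced rollover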
A novel probabilistic framework for event-based speech recognition
NASA Astrophysics Data System (ADS)
Juneja, Amit; Espy-Wilson, Carol
2003-10-01
One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary-valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant, and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
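The control flow described above (frame-level binary manner-feature classifiers yield probabilistic landmark sequences, and place/source features are then classified at those landmarks) can be caricatured as follows. The thresholding rule, feature names, and the toy place classifier are placeholders standing in for the knowledge-based acoustic parameters and statistical classifiers of the actual framework.

    import numpy as np

    def detect_landmarks(sonorant_prob, threshold=0.5):
        # Place a landmark wherever the frame-level P(sonorant) crosses the threshold.
        above = sonorant_prob >= threshold
        return [t for t in range(1, len(above)) if above[t] != above[t - 1]]

    def classify_place(acoustic_params):
        # Placeholder for a place-of-articulation classifier applied at a landmark.
        return "labial" if acoustic_params["burst_freq"] < 2000 else "alveolar"

    # Hypothetical per-frame sonorant probabilities from a binary manner classifier.
    sonorant_prob = np.array([0.1, 0.2, 0.8, 0.9, 0.85, 0.3, 0.1])
    landmarks = detect_landmarks(sonorant_prob)
    print(landmarks)  # frame indices where the manner class changes, e.g. [2, 5]

    for t in landmarks:
        burst_freq = 1500.0 if t == 2 else 3200.0  # stand-in acoustic parameter values
        print(t, classify_place({"burst_freq": burst_freq}))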
Kim, Min-Beom; Chung, Won-Ho; Choi, Jeesun; Hong, Sung Hwa; Cho, Yang-Sun; Park, Gyuseok; Lee, Sangmin
2014-06-01
The objective was to evaluate speech perception improvement through Bluetooth-implemented hearing aids in hearing-impaired adults. Thirty subjects with bilateral symmetric moderate sensorineural hearing loss participated in this study. A Bluetooth-implemented hearing aid was fitted unilaterally in all study subjects. Objective speech recognition scores and subjective satisfaction were measured with a Bluetooth-implemented hearing aid used to replace the acoustic connection from either a cellular phone or a loudspeaker system. In each system, participants were assigned to 4 conditions: wireless speech signal transmission into the hearing aid (wireless mode) in a quiet or noisy environment and conventional speech signal transmission using the external microphone of the hearing aid (conventional mode) in a quiet or noisy environment. Also, participants completed questionnaires to investigate subjective satisfaction. In both the cellular phone and loudspeaker system situations, participants showed improvements in sentence and word recognition scores with the wireless mode compared to the conventional mode in both quiet and noise conditions (P < .001). Participants also reported subjective improvements, including better sound quality, less noise interference, and better accuracy and naturalness, when using the wireless mode (P < .001). Bluetooth-implemented hearing aids helped to improve subjective and objective speech recognition performance in quiet and noisy environments during the use of electronic audio devices.
A single dual-stream framework for syntactic computations in music and language.
Musso, Mariacristina; Weiller, Cornelius; Horn, Andreas; Glauche, Volkmer; Umarova, Roza; Hennig, Jürgen; Schneider, Albrecht; Rijntjes, Michel
2015-08-15
This study is the first to compare in the same subjects the specific spatial distribution and the functional and anatomical connectivity of the neuronal resources that activate and integrate syntactic representations during music and language processing. Combining functional magnetic resonance imaging with functional connectivity and diffusion tensor imaging-based probabilistic tractography, we examined the brain network involved in the recognition and integration of words and chords that were not hierarchically related to the preceding syntax; that is, those deviating from the universal principles of grammar and tonal relatedness. This kind of syntactic processing in both domains was found to rely on a shared network in the left hemisphere centered on the inferior part of the inferior frontal gyrus (IFG), including pars opercularis and pars triangularis, and on dorsal and ventral long association tracts connecting this brain area with temporo-parietal regions. Language processing utilized some adjacent left hemispheric IFG and middle temporal regions more than music processing, and music processing also involved right hemisphere regions not activated in language processing. Our data indicate that a dual-stream system with dorsal and ventral long association tracts centered on a functionally and structurally highly differentiated left IFG is pivotal for domain-general syntactic competence over a broad range of elements including words and chords. Copyright © 2015 Elsevier Inc. All rights reserved.
Adaptive false memory: Imagining future scenarios increases false memories in the DRM paradigm.
Dewhurst, Stephen A; Anderson, Rachel J; Grace, Lydia; van Esch, Lotte
2016-10-01
Previous research has shown that rating words for their relevance to a future scenario enhances memory for those words. The current study investigated the effect of future thinking on false memory using the Deese/Roediger-McDermott (DRM) procedure. In Experiment 1, participants rated words from 6 DRM lists for relevance to a past or future event (with or without planning) or in terms of pleasantness. In a surprise recall test, levels of correct recall did not vary between the rating tasks, but the future rating conditions led to significantly higher levels of false recall than the past and pleasantness conditions did. Experiment 2 found that future rating led to higher levels of false recognition than did past and pleasantness ratings but did not affect correct recognition. The effect in false recognition was, however, eliminated when DRM items were presented in random order. Participants in Experiment 3 were presented with both DRM lists and lists of unrelated words. Future rating increased levels of false recognition for DRM lures but did not affect correct recognition for DRM or unrelated lists. The findings are discussed in terms of the view that false memories can be associated with adaptive memory functions.
Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J
2017-06-01
Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.
Do age-related word retrieval difficulties appear (or disappear) in connected speech?
Kavé, Gitit; Goral, Mira
2017-09-01
We conducted a comprehensive literature review of studies of word retrieval in connected speech in healthy aging and reviewed relevant aphasia research that could shed light on the aging literature. Four main hypotheses guided the review: (1) Significant retrieval difficulties would lead to reduced output in connected speech. (2) Significant retrieval difficulties would lead to a more limited lexical variety in connected speech. (3) Significant retrieval difficulties would lead to an increase in word substitution errors and in pronoun use as well as to greater dysfluency and hesitation in connected speech. (4) Retrieval difficulties on tests of single-word production would be associated with measures of word retrieval in connected speech. Studies on aging did not confirm these four hypotheses, unlike studies on aphasia that generally did. The review suggests that future research should investigate how context facilitates word production in old age.
Functions of graphemic and phonemic codes in visual word-recognition.
Meyer, D E; Schvaneveldt, R W; Ruddy, M G
1974-03-01
Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.
Pisoni, David B.; Cleary, Miranda
2012-01-01
Large individual differences in spoken word recognition performance have been found in deaf children after cochlear implantation. Recently, Pisoni and Geers (2000) reported that simple forward digit span measures of verbal working memory were significantly correlated with spoken word recognition scores even after potentially confounding variables were statistically controlled for. The present study replicates and extends these initial findings to the full set of 176 participants in the CID cochlear implant study. The pooled data indicate that despite statistical “partialling-out” of differences in chronological age, communication mode, duration of deafness, duration of device use, age at onset of deafness, number of active electrodes, and speech feature discrimination, significant correlations still remain between digit span and several measures of spoken word recognition. Strong correlations were also observed between speaking rate and both forward and backward digit span, a result that is similar to previously reported findings in normal-hearing adults and children. The results suggest that perhaps as much as 20% of the currently unexplained variance in spoken word recognition scores may be independently accounted for by individual differences in cognitive factors related to the speed and efficiency with which phonological and lexical representations of spoken words are maintained in and retrieved from working memory. A smaller percentage, perhaps about 7% of the currently unexplained variance in spoken word recognition scores, may be accounted for in terms of working memory capacity. We discuss how these relationships may arise and their contribution to subsequent speech and language development in prelingually deaf children who use cochlear implants. PMID:12612485
Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat
2014-01-01
Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called “consonant bias”). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo -PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading. PMID:24523917
A multistream model of visual word recognition.
Allen, Philip A; Smith, Albert F; Lien, Mei-Ching; Kaut, Kevin P; Canfield, Angie
2009-02-01
Four experiments are reported that test a multistream model of visual word recognition, which associates letter-level and word-level processing channels with three known visual processing streams isolated in macaque monkeys: the magno-dominated (MD) stream, the interblob-dominated (ID) stream, and the blob-dominated (BD) stream (Van Essen & Anderson, 1995). We show that mixing the color of adjacent letters of words does not result in facilitation of response times or error rates when the spatial-frequency pattern of a whole word is familiar. However, facilitation does occur when the spatial-frequency pattern of a whole word is not familiar. This pattern of results is not due to different luminance levels across the different-colored stimuli and the background because isoluminant displays were used. Also, the mixed-case, mixed-hue facilitation occurred when different display distances were used (Experiments 2 and 3), so this suggests that image normalization can adjust independently of object size differences. Finally, we show that this effect persists in both spaced and unspaced conditions (Experiment 4)--suggesting that inappropriate letter grouping by hue cannot account for these results. These data support a model of visual word recognition in which lower spatial frequencies are processed first in the more rapid MD stream. The slower ID and BD streams may process some lower spatial frequency information in addition to processing higher spatial frequency information, but these channels tend to lose the processing race to recognition unless the letter string is unfamiliar to the MD stream--as with mixed-case presentation.
Syntactic Predictability in the Recognition of Carefully and Casually Produced Speech
ERIC Educational Resources Information Center
Viebahn, Malte C.; Ernestus, Mirjam; McQueen, James M.
2015-01-01
The present study investigated whether the recognition of spoken words is influenced by how predictable they are given their syntactic context and whether listeners assign more weight to syntactic predictability when acoustic-phonetic information is less reliable. Syntactic predictability was manipulated by varying the word order of past…
ERIC Educational Resources Information Center
Guan, Connie Qun; Liu, Ying; Chan, Derek Ho Leung; Ye, Feifei; Perfetti, Charles A.
2011-01-01
Learning to write words may strengthen orthographic representations and thus support word-specific recognition processes. This hypothesis applies especially to Chinese because its writing system encourages character-specific recognition that depends on accurate representation of orthographic form. We report 2 studies that test this hypothesis in…
Recognition Memory for Words and Faces Before and After Temporal Lobectomy.
ERIC Educational Resources Information Center
Naugle, Richard I.; And Others
1994-01-01
To assess the sensitivity of Warrington's Recognition Memory Test "words" and "faces" subtests to lateralized temporal lobe seizure foci and effects of epilepsy surgery, 27 left- and 39 right-temporal lobectomy patients were tested before and after surgery. Conditions under which misclassification is most likely are discussed.…
Lexical Competition in Non-Native Spoken-Word Recognition
ERIC Educational Resources Information Center
Weber, Andrea; Cutler, Anne
2004-01-01
Four eye-tracking experiments examined lexical competition in non-native spoken-word recognition. Dutch listeners hearing English fixated longer on distractor pictures with names containing vowels that Dutch listeners are likely to confuse with vowels in a target picture name ("pencil," given target "panda") than on less confusable distractors…
Hemispheric Differences in Indexical Specificity Effects in Spoken Word Recognition
ERIC Educational Resources Information Center
Gonzalez, Julio; McLennan, Conor T.
2007-01-01
Variability in talker identity, one type of indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. Furthermore, neuropsychological evidence suggests that indexical and linguistic information may be represented and processed differently in the 2 cerebral hemispheres, and is consistent with findings from…
Context and Spoken Word Recognition in a Novel Lexicon
ERIC Educational Resources Information Center
Revill, Kathleen Pirog; Tanenhaus, Michael K.; Aslin, Richard N.
2008-01-01
Three eye movement studies with novel lexicons investigated the role of semantic context in spoken word recognition, contrasting 3 models: restrictive access, access-selection, and continuous integration. Actions directed at novel shapes caused changes in motion (e.g., looming, spinning) or state (e.g., color, texture). Across the experiments,…
Visual Speech Primes Open-Set Recognition of Spoken Words
ERIC Educational Resources Information Center
Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.
2009-01-01
Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…
Collaborative Efforts to Promote Emergent Literacy and Efficient Word Recognition Skills
ERIC Educational Resources Information Center
Roth, Froma P.; Troia, Gary A.
2006-01-01
In this article, 3 models of collaboration between speech-language pathologists and classroom teachers are discussed to promote emergent literacy and accurate and fluent word recognition. These models are demonstration lessons, team teaching, and consultation. A number of instructional principles are presented for emergent literacy and decoding…
Short-Term and Long-Term Effects on Visual Word Recognition
ERIC Educational Resources Information Center
Protopapas, Athanassios; Kapnoula, Efthymia C.
2016-01-01
Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…
ERIC Educational Resources Information Center
Lehtonen, Minna; Hulten, Annika; Rodriguez-Fornells, Antoni; Cunillera, Toni; Tuomainen, Jyrki; Laine, Matti
2012-01-01
We investigated the behavioral and brain responses (ERPs) of bilingual word recognition to three fundamental psycholinguistic factors, frequency, morphology, and lexicality, in early bilinguals vs. monolinguals. Earlier behavioral studies have reported larger frequency effects in bilinguals' nondominant vs. dominant language and in some studies…
Word Recognition: Theoretical Issues and Instructional Hints.
ERIC Educational Resources Information Center
Smith, Edward E.; Kleiman, Glenn M.
Research on adult readers' word recognition skills is used in this paper to develop a general information processing model of reading. Stages of the model include feature extraction, interpretation, lexical access, working memory, and integration. Of those stages, particular attention is given to the units of interpretation, speech recoding and…
Separating Speed from Accuracy in Beginning Reading Development
ERIC Educational Resources Information Center
Juul, Holger; Poulsen, Mads; Elbro, Carsten
2014-01-01
Phoneme awareness, letter knowledge, and rapid automatized naming (RAN) are well-known kindergarten predictors of later word recognition skills, but it is not clear whether they predict developments in accuracy or speed, or both. The present longitudinal study of 172 Danish beginning readers found that speed of word recognition mainly developed…
Leal, Stephanie L; Noche, Jessica A; Murray, Elizabeth A; Yassa, Michael A
2017-01-01
While aging is generally associated with episodic memory decline, not all older adults exhibit memory loss. Furthermore, emotional memories are not subject to the same extent of forgetting and appear preserved in aging. We conducted high-resolution fMRI during a task involving pattern separation of emotional information in older adults with and without age-related memory impairment (characterized by performance on a word-list learning task: low performers: LP vs. high performers: HP). We found signals consistent with emotional pattern separation in hippocampal dentate (DG)/CA3 in HP but not in LP individuals, suggesting a deficit in emotional pattern separation. During false recognition, we found increased DG/CA3 activity in LP individuals, suggesting that hyperactivity may be associated with overgeneralization. We additionally observed a selective deficit in basolateral amygdala-lateral entorhinal cortex-DG/CA3 functional connectivity in LP individuals during pattern separation of negative information. During negative false recognition, LP individuals showed increased medial temporal lobe functional connectivity, consistent with overgeneralization. Overall, these results suggest a novel mechanistic account of individual differences in emotional memory alterations exhibited in aging. Copyright © 2016 Elsevier Inc. All rights reserved.
Sommers, M S; Kirk, K I; Pisoni, D B
1997-04-01
The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words---perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.
Image jitter enhances visual performance when spatial resolution is impaired.
Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko
2012-09-06
Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.
Batterink, Laura; Neville, Helen
2011-01-01
The vast majority of word meanings are learned simply by extracting them from context, rather than by rote memorization or explicit instruction. Although this skill is remarkable, little is known about the brain mechanisms involved. In the present study, ERPs were recorded as participants read stories in which pseudowords were presented multiple times, embedded in consistent, meaningful contexts (referred to as meaning condition, M+) or inconsistent, meaningless contexts (M−). Word learning was then assessed implicitly using a lexical decision task and explicitly through recall and recognition tasks. Overall, during story reading, M− words elicited a larger N400 than M+ words, suggesting that participants were better able to semantically integrate M+ words than M− words throughout the story. In addition, M+ words whose meanings were subsequently correctly recognized and recalled elicited a more positive ERP in a later time-window compared to M+ words whose meanings were incorrectly remembered, consistent with the idea that the late positive component (LPC) is an index of encoding processes. In the lexical decision task, no behavioral or electrophysiological evidence for implicit priming was found for M+ words. In contrast, during the explicit recognition task, M+ words showed a robust N400 effect. The N400 effect was dependent upon recognition performance, such that only correctly recognized M+ words elicited an N400. This pattern of results provides evidence that the explicit representations of word meanings can develop rapidly, while implicit representations may require more extensive exposure or more time to emerge. PMID:21452941
Patten, Hannah
2017-01-01
Purpose: This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Method: Fifty children, with a mean age of 8 years (range 5–12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Results: Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. Conclusions: These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed. PMID:28241284
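The hierarchical regression logic described above (entering chronological age first, then nonword repetition and vocabulary, and reading off the added variance at each step) can be sketched as follows; the data, block names, and helper functions are placeholders, not the study's analysis.

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit, with an intercept column added."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def hierarchical_steps(y, blocks):
    """Enter predictor blocks in order and report R^2 and Delta-R^2 at each step,
    mirroring the 'after controlling for chronological age' logic above."""
    results, X, prev = [], np.empty((len(y), 0)), 0.0
    for name, block in blocks:
        X = np.column_stack([X, block])
        r2 = r_squared(X, y)
        results.append((name, r2, r2 - prev))
        prev = r2
    return results

# Hypothetical usage with placeholder arrays (one value per child):
# steps = hierarchical_steps(word_learning_score,
#                            [("age", age), ("nonword repetition", nwr), ("vocabulary", vocab)])
```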
HMM-based lexicon-driven and lexicon-free word recognition for online handwritten Indic scripts.
Bharath, A; Madhvanath, Sriganesh
2012-04-01
Research for recognizing online handwritten words in Indic scripts is at its early stages when compared to Latin and Oriental scripts. In this paper, we address this problem specifically for two major Indic scripts--Devanagari and Tamil. In contrast to previous approaches, the techniques we propose are largely data driven and script independent. We propose two different techniques for word recognition based on Hidden Markov Models (HMM): lexicon driven and lexicon free. The lexicon-driven technique models each word in the lexicon as a sequence of symbol HMMs according to a standard symbol writing order derived from the phonetic representation. The lexicon-free technique uses a novel Bag-of-Symbols representation of the handwritten word that is independent of symbol order and allows rapid pruning of the lexicon. On handwritten Devanagari word samples featuring both standard and nonstandard symbol writing orders, a combination of lexicon-driven and lexicon-free recognizers significantly outperforms either of them used in isolation. In contrast, most Tamil word samples feature the standard symbol order, and the lexicon-driven recognizer outperforms the lexicon-free one as well as their combination. The best recognition accuracies obtained for 20,000-word lexicons are 87.13 percent for Devanagari when the two recognizers are combined, and 91.8 percent for Tamil using the lexicon-driven technique.
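For readers unfamiliar with the lexicon-driven approach sketched above, the following Python fragment illustrates the core idea under simplifying assumptions: each lexicon entry is scored by concatenating per-symbol HMMs into one left-to-right word model and running the forward algorithm over the observed feature frames. The discrete-emission HMMs, the symbol inventory, and the names build_word_hmm, log_likelihood, and recognise are hypothetical placeholders, not the authors' implementation.

```python
import numpy as np

def build_word_hmm(symbol_hmms, spelling):
    """Concatenate left-to-right symbol HMMs into one word-level HMM.
    symbol_hmms maps a symbol to (A, B): A is an (S, S) left-to-right transition
    matrix whose last row keeps some probability mass for exiting the symbol,
    and B is an (S, V) emission matrix over a discrete feature codebook."""
    As, Bs = zip(*(symbol_hmms[s] for s in spelling))
    sizes = [a.shape[0] for a in As]
    n = sum(sizes)
    A = np.zeros((n, n))
    B = np.vstack(Bs)
    offset = 0
    for i, a in enumerate(As):
        s = sizes[i]
        A[offset:offset + s, offset:offset + s] = a
        if i + 1 < len(As):
            # route the symbol's exit probability into the next symbol's entry state
            A[offset + s - 1, offset + s] = 1.0 - a[-1].sum()
        offset += s
    pi = np.zeros(n)
    pi[0] = 1.0                      # start in the first state of the first symbol
    return A, B, pi

def log_likelihood(A, B, pi, obs):
    """Forward algorithm in the log domain for a discrete-emission HMM."""
    with np.errstate(divide="ignore"):
        logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    alpha = logpi + logB[:, obs[0]]
    for o in obs[1:]:
        alpha = logB[:, o] + np.logaddexp.reduce(alpha[:, None] + logA, axis=0)
    return np.logaddexp.reduce(alpha)

def recognise(obs, lexicon, symbol_hmms):
    """Return the spelling (sequence of symbols) whose concatenated word model
    best explains the observed codebook indices `obs`."""
    return max(lexicon, key=lambda w: log_likelihood(*build_word_hmm(symbol_hmms, w), obs))
```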
Clustering of Farsi sub-word images for whole-book recognition
NASA Astrophysics Data System (ADS)
Soheili, Mohammad Reza; Kabir, Ehsanollah; Stricker, Didier
2015-01-01
Redundancy of word and sub-word occurrences in large documents can be effectively utilized in an OCR system to improve recognition results. Most OCR systems employ language modeling techniques as a post-processing step; however, these techniques do not use important pictorial information that exists in the text image. In the case of large-scale recognition of degraded documents, this information is even more valuable. In our previous work, we proposed a sub-word image clustering method for applications dealing with large printed documents. In our clustering method, the ideal case is when all equivalent sub-word images lie in one cluster. To overcome the issues of low print quality, the clustering method uses an image matching algorithm for measuring the distance between two sub-word images. The measured distance, together with a set of simple shape features, was used to cluster all sub-word images. In this paper, we analyze the effects of adding more shape features on processing time, purity of clustering, and the final recognition rate. Previously published experiments have shown the efficiency of our method on a book. Here we present extended experimental results and evaluate our method on another book with a totally different font face. We also show that the number of newly created clusters in a page can be used as a criterion for assessing the quality of print and evaluating preprocessing phases.
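As a rough sketch of the kind of clustering described above (not the authors' algorithm), the fragment below assigns binarized sub-word images to clusters with a greedy leader scheme, using a few toy shape descriptors and a pluggable distance function standing in for the paper's image-matching score. All names and thresholds are illustrative assumptions.

```python
import numpy as np

def shape_features(img):
    """Toy shape descriptors for a binarized sub-word image (0/1 ndarray with at
    least one ink pixel): aspect ratio, ink density, and normalized ink centroid.
    These stand in for the richer shape features discussed in the abstract."""
    h, w = img.shape
    ys, xs = np.nonzero(img)
    return np.array([w / h, img.mean(), xs.mean() / w, ys.mean() / h])

def leader_cluster(images, distance, threshold):
    """Greedy 'leader' clustering: each image joins the first cluster whose
    representative is closer than `threshold`, otherwise it founds a new cluster."""
    reps, clusters = [], []
    for idx, img in enumerate(images):
        feats = shape_features(img)
        for c, rep in enumerate(reps):
            if distance(feats, rep) < threshold:
                clusters[c].append(idx)
                break
        else:
            reps.append(feats)
            clusters.append([idx])
    return clusters

# Example usage, with Euclidean distance standing in for the image-matching score:
# clusters = leader_cluster(subword_images, lambda a, b: np.linalg.norm(a - b), 0.1)
```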
Page, M. P. A.; Norris, D.
2009-01-01
We briefly review the considerable evidence for a common ordering mechanism underlying both immediate serial recall (ISR) tasks (e.g. digit span, non-word repetition) and the learning of phonological word forms. In addition, we discuss how recent work on the Hebb repetition effect is consistent with the idea that learning in this task is itself a laboratory analogue of the sequence-learning component of phonological word-form learning. In this light, we present a unifying modelling framework that seeks to account for ISR and Hebb repetition effects, while being extensible to word-form learning. Because word-form learning is performed in the service of later word recognition, our modelling framework also subsumes a mechanism for word recognition from continuous speech. Simulations of a computational implementation of the modelling framework are presented and are shown to be in accordance with data from the Hebb repetition paradigm. PMID:19933143
ERP correlates of recognition memory in Autism Spectrum Disorder.
Massand, Esha; Bowler, Dermot M; Mottron, Laurent; Hosein, Anthony; Jemel, Boutheina
2013-09-01
Recognition memory in autism spectrum disorder (ASD) tends to be undiminished compared to that of typically developing (TD) individuals (Bowler et al. 2007), but it is still unknown whether memory in ASD relies on qualitatively similar or different neurophysiology. We sought to explore the neural activity underlying recognition by employing the old/new word repetition event-related potential effect. Behavioural recognition performance was comparable across both groups, and demonstrated superior recognition for low frequency over high frequency words. However, the ASD group showed a parietal rather than anterior onset (300-500 ms), and diminished right frontal old/new effects (800-1500 ms) relative to TD individuals. This study shows that undiminished recognition performance results from a pattern of differing functional neurophysiology in ASD.
Effects of post-encoding stress on performance in the DRM false memory paradigm
Pardilla-Delgado, Enmanuelle; Alger, Sara E.; Cunningham, Tony J.; Kinealy, Brian
2016-01-01
Numerous studies have investigated how stress impacts veridical memory, but how stress influences false memory formation remains poorly understood. In order to target memory consolidation specifically, a psychosocial stress (TSST) or control manipulation was administered following encoding of 15 neutral, semantically related word lists (DRM false memory task) and memory was tested 24 h later. Stress decreased recognition of studied words, while increasing false recognition of semantically related lure words. Moreover, while control subjects remembered true and false words equivalently, stressed subjects remembered more false than true words. These results suggest that stress supports gist memory formation in the DRM task, perhaps by hindering detail-specific processing in the hippocampus. PMID:26670187
Poliva, Oren
2016-01-01
The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences). PMID:27445676
A neuroimaging study of conflict during word recognition.
Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F
2010-08-04
Using functional magnetic resonance imaging, the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish, conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.
Encoding context and false recognition memories.
Bruce, Darryl; Phillips-Grant, Kimberly; Conrad, Nicole; Bona, Susan
2004-09-01
False recognition of an extralist word that is thematically related to all words of a study list may reflect internal activation of the theme word during encoding followed by impaired source monitoring at retrieval, that is, difficulty in determining whether the word had actually been experienced or merely thought of. To assist source monitoring, distinctive visual or verbal contexts were added to study words at input. Both types of context produced similar effects: False alarms to theme-word (critical) lures were reduced; remember judgements of critical lures called old were lower; and if contextual information had been added to lists, subjects indicated as much for list items and associated critical foils identified as old. The visual and verbal contexts used in the present studies were held to disrupt semantic categorisation of list words at input and to facilitate source monitoring at output.
Juhasz, Barbara J; Lai, Yun-Hsuan; Woodcock, Michelle L
2015-12-01
Since the work of Taft and Forster (1976), a growing literature has examined how English compound words are recognized and organized in the mental lexicon. Much of this research has focused on whether compound words are decomposed during recognition by manipulating the word frequencies of their lexemes. However, many variables may impact morphological processing, including relational semantic variables such as semantic transparency, as well as additional form-related and semantic variables. In the present study, ratings were collected on 629 English compound words for six variables [familiarity, age of acquisition (AoA), semantic transparency, lexeme meaning dominance (LMD), imageability, and sensory experience ratings (SER)]. All of the compound words selected for this study are contained within the English Lexicon Project (Balota et al., 2007), which made it possible to use a regression approach to examine the predictive power of these variables for lexical decision and word naming performance. Analyses indicated that familiarity, AoA, imageability, and SER were all significant predictors of both lexical decision and word naming performance when they were added separately to a model containing the length and frequency of the compounds, as well as the lexeme frequencies. In addition, rated semantic transparency also predicted lexical decision performance. The database of English compound words should be beneficial to word recognition researchers who are interested in selecting items for experiments on compound words, and it will also allow researchers to conduct further analyses using the available data combined with word recognition times included in the English Lexicon Project.
ERIC Educational Resources Information Center
Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.
2007-01-01
Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…
A Comparison of Two Flashcard Drill Methods Targeting Word Recognition
ERIC Educational Resources Information Center
Volpe, Robert J.; Mule, Christina M.; Briesch, Amy M.; Joseph, Laurice M.; Burns, Matthew K.
2011-01-01
Traditional drill and practice (TD) and incremental rehearsal (IR) are two flashcard drill instructional methods previously noted to improve word recognition. The current study sought to compare the effectiveness and efficiency of these two methods, as assessed by next day retention assessments, under 2 conditions (i.e., opportunities to respond…
Neighborhood Frequency Effect in Chinese Word Recognition: Evidence from Naming and Lexical Decision
ERIC Educational Resources Information Center
Li, Meng-Feng; Gao, Xin-Yu; Chou, Tai-Li; Wu, Jei-Tun
2017-01-01
Neighborhood frequency is a crucial variable to know the nature of word recognition. Different from alphabetic scripts, neighborhood frequency in Chinese is usually confounded by component character frequency and neighborhood size. Three experiments were designed to explore the role of the neighborhood frequency effect in Chinese and the stimuli…
Early Decomposition in Visual Word Recognition: Dissociating Morphology, Form, and Meaning
ERIC Educational Resources Information Center
Marslen-Wilson, William D.; Bozic, Mirjana; Randall, Billi
2008-01-01
The role of morphological, semantic, and form-based factors in the early stages of visual word recognition was investigated across different SOAs in a masked priming paradigm, focusing on English derivational morphology. In a first set of experiments, stimulus pairs co-varying in morphological decomposability and in semantic and orthographic…
Examining the Time Course of Indexical Specificity Effects in Spoken Word Recognition
ERIC Educational Resources Information Center
McLennan, Conor T.; Luce, Paul A.
2005-01-01
Variability in talker identity and speaking rate, commonly referred to as indexical variation, has demonstrable effects on the speed and accuracy of spoken word recognition. The present study examines the time course of indexical specificity effects to evaluate the hypothesis that such effects occur relatively late in the perceptual processing of…
ERIC Educational Resources Information Center
Weaver, Phyllis A.; Rosner, Jerome
1979-01-01
Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…
Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?
ERIC Educational Resources Information Center
Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.
2006-01-01
The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…
Automatization and Orthographic Development in Second Language Visual Word Recognition
ERIC Educational Resources Information Center
Kida, Shusaku
2016-01-01
The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…
The Overlap Model: A Model of Letter Position Coding
ERIC Educational Resources Information Center
Gomez, Pablo; Ratcliff, Roger; Perea, Manuel
2008-01-01
Recent research has shown that letter identity and letter position are not integral perceptual dimensions (e.g., jugde primes judge in word-recognition experiments). Most comprehensive computational models of visual word recognition (e.g., the interactive activation model, J. L. McClelland & D. E. Rumelhart, 1981, and its successors) assume that…
Implicit Processing of Phonotactic Cues: Evidence from Electrophysiological and Vascular Responses
ERIC Educational Resources Information Center
Rossi, Sonja; Jurgenson, Ina B.; Hanulikova, Adriana; Telkemeyer, Silke; Wartenburger, Isabell; Obrig, Hellmuth
2011-01-01
Spoken word recognition is achieved via competition between activated lexical candidates that match the incoming speech input. The competition is modulated by prelexical cues that are important for segmenting the auditory speech stream into linguistic units. One such prelexical cue that listeners rely on in spoken word recognition is phonotactics.…
Embodied Transcription: A Creative Method for Using Voice-Recognition Software
ERIC Educational Resources Information Center
Brooks, Christine
2010-01-01
Voice-recognition software is designed to be used by one user (voice) at a time, requiring a researcher to speak all of the words of a recorded interview to achieve transcription. Thus, the researcher becomes a conduit through which interview material is inscribed as written word. Embodied Transcription acknowledges performative and interpretative…
ERIC Educational Resources Information Center
Silverston, Randall A.; Deichmann, John W.
The purpose of this study was to design and test a remedial reading instructional strategy for word recognition skills utilizing specific intersensory transfer components. The subjects were 56 high school sophomores and juniors enrolled in special education classes. Eight subjects were randomly selected from each of seven special education…
Learning of Letter Names and Sounds and Their Contribution to Word Recognition
ERIC Educational Resources Information Center
Levin, Iris; Shatil-Carmon, Sivan; Asif-Rave, Ornit
2006-01-01
This study investigated knowledge of letter names and letter sounds, their learning, and their contributions to word recognition. Of 123 preschoolers examined on letter knowledge, 65 underwent training on both letter names and letter sounds in a counterbalanced order. Prior to training, children were more advanced in associating letters with their…
The Reading Process--The Relationship Between Word Recognition and Comprehension.
ERIC Educational Resources Information Center
Hays, Warren S.
The purpose of this study was to determine the relationship between word recognition and comprehension achieved by second and fifth grade students when reading material at various levels of readability. A random sample of twenty-five second and twenty-five fifth graders, taken from three middle class schools, was administered a…
Prediction of Word Recognition in the First Half of Grade 1
ERIC Educational Resources Information Center
Snel, M. J.; Aarnoutse, C. A. J.; Terwel, J.; van Leeuwe, J. F. J.; van der Veld, W. M.
2016-01-01
Early detection of reading problems is important to prevent an enduring lag in reading skills. We studied the relationship between speed of word recognition (after six months of grade 1 education) and four kindergarten pre-literacy skills: letter knowledge, phonological awareness and naming speed for both digits and letters. Our sample consisted…
ERP Evidence of Hemispheric Independence in Visual Word Recognition
ERIC Educational Resources Information Center
Nemrodov, Dan; Harpaz, Yuval; Javitt, Daniel C.; Lavidor, Michal
2011-01-01
This study examined the capability of the left hemisphere (LH) and the right hemisphere (RH) to perform a visual recognition task independently as formulated by the Direct Access Model (Fernandino, Iacoboni, & Zaidel, 2007). Healthy native Hebrew speakers were asked to categorize nouns and non-words (created from nouns by transposing two middle…
Finding Words in a Language that Allows Words without Vowels
ERIC Educational Resources Information Center
El Aissati, Abder; McQueen, James M.; Cutler, Anne
2012-01-01
Across many languages from unrelated families, spoken-word recognition is subject to a constraint whereby potential word candidates must contain a vowel. This constraint minimizes competition from embedded words (e.g., in English, disfavoring "win" in "twin" because "t" cannot be a word). However, the constraint would be counter-productive in…
Automatic voice recognition using traditional and artificial neural network approaches
NASA Technical Reports Server (NTRS)
Botros, Nazeih M.
1989-01-01
The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm of order 10. Continuous-word and speaker-independent recognition will be considered in a future study after accomplishing this isolated-word research. To examine the similarity between the reference and the training sets, two approaches are explored. The first is implementing traditional pattern recognition techniques where a dynamic time warping algorithm is applied to align the two sets and calculate the probability of matching by measuring the Euclidean distance between the two sets. The second is implementing a backpropagation artificial neural net model with three layers as the pattern classifier. The adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished. A vocabulary of 50 words was selected and tested. The accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
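The first, template-matching approach can be illustrated with a short dynamic time warping sketch: each stored word template is a sequence of per-frame LPC feature vectors, the test utterance is aligned to every template, and the word with the lowest normalized alignment cost wins. This is a generic DTW sketch under those assumptions, not the original algorithm or code; the function names and the length normalization are illustrative choices.

```python
import numpy as np

def dtw_distance(ref, test):
    """Dynamic time warping cost between two feature sequences of shape (T, D),
    using Euclidean frame distances and the standard three-way recursion."""
    n, m = len(ref), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(ref[i - 1] - test[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m] / (n + m)        # length-normalized alignment cost

def recognize(test_features, templates):
    """Nearest-template isolated-word recognition: `templates` maps each word
    label to a stored sequence of per-frame LPC coefficient vectors."""
    return min(templates, key=lambda word: dtw_distance(templates[word], test_features))
```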
Face recognition system and method using face pattern words and face pattern bytes
Zheng, Yufeng
2014-12-23
The present invention provides a novel system and method for identifying individuals and for face recognition utilizing facial features for face identification. The system and method of the invention comprise creating facial features or face patterns called face pattern words and face pattern bytes for face identification. The invention also provides for pattern recognition for identification other than face recognition. The invention further provides a means for identifying individuals based on visible and/or thermal images of those individuals by utilizing computer software implemented by instructions on a computer or computer system and a computer readable medium containing instructions on a computer system for face recognition and identification.
Optimizing estimation of hemispheric dominance for language using magnetic source imaging.
Passaro, Antony D; Rezaie, Roozbeh; Moser, Dana C; Li, Zhimin; Dias, Nadeeka; Papanicolaou, Andrew C
2011-10-06
The efficacy of magnetoencephalography (MEG) as an alternative to invasive methods for investigating the cortical representation of language has been explored in several studies. Recently, studies comparing MEG to the gold standard Wada procedure have found inconsistent and often less-than-accurate estimates of laterality across various MEG studies. Here we attempted to address this issue among normal right-handed adults (N=12) by supplementing a well-established MEG protocol involving word recognition and the single dipole method with a sentence comprehension task and a beamformer approach localizing neural oscillations. Beamformer analysis of word recognition and sentence comprehension tasks revealed a desynchronization in the 10-18 Hz range, localized to the temporo-parietal cortices. Inspection of individual profiles of localized desynchronization (10-18 Hz) revealed left hemispheric dominance in 91.7% and 83.3% of individuals during the word recognition and sentence comprehension tasks, respectively. In contrast, single dipole analysis yielded lower estimates, such that activity in temporal language regions was left-lateralized in 66.7% and 58.3% of individuals during word recognition and sentence comprehension, respectively. The results obtained from the word recognition task and localization of oscillatory activity using a beamformer appear to be in line with general estimates of left hemispheric dominance for language in normal right-handed individuals. Furthermore, the current findings support the growing notion that changes in neural oscillations underlie critical components of linguistic processing. Published by Elsevier B.V.
Reading handprinted addresses on IRS tax forms
NASA Astrophysics Data System (ADS)
Ramanaprasad, Vemulapati; Shin, Yong-Chul; Srihari, Sargur N.
1996-03-01
The hand-printed address recognition system described in this paper is a part of the Name and Address Block Reader (NABR) system developed by the Center of Excellence for Document Analysis and Recognition (CEDAR). NABR is currently being used by the IRS to read address blocks (hand-print as well as machine-print) on fifteen different tax forms. Although machine-print address reading was relatively straightforward, hand-print address recognition has posed some special challenges due to demands on processing speed (with an expected throughput of 8450 forms/hour) and recognition accuracy. We discuss various subsystems involved in hand-printed address recognition, including word segmentation, word recognition, digit segmentation, and digit recognition. We also describe control strategies used to make effective use of these subsystems to maximize recognition accuracy. We present system performance on 931 address blocks in recognizing various fields, such as city, state, ZIP Code, street number and name, and personal names.
The picture superiority effect: support for the distinctiveness model.
Mintzer, M Z; Snodgrass, J G
1999-01-01
The form change paradigm was used to explore the basis for the picture superiority effect. Recognition memory for studied pictures and words was tested in their study form or the alternate form. Form change cost was defined as the difference between recognition performance for same and different form items. Based on the results of Experiment 1 and previous studies, it was difficult to determine the relative cost for studied pictures and words due to a reversal of the mirror effect. We hypothesized that the reversed mirror effect results from subjects' basing their recognition decisions on their assumptions about the study form. Experiments 2 and 3 confirmed this hypothesis and generated a method for evaluating the relative cost for pictures and words despite the reversed mirror effect. More cost was observed for pictures than words, supporting the distinctiveness model of the picture superiority effect.
Speech-perception training for older adults with hearing loss impacts word recognition and effort.
Kuchinsky, Stefanie E; Ahlstrom, Jayne B; Cute, Stephanie L; Humes, Larry E; Dubno, Judy R; Eckert, Mark A
2014-10-01
The current pupillometry study examined the impact of speech-perception training on word recognition and cognitive effort in older adults with hearing loss. Trainees identified more words at the follow-up than at the baseline session. Training also resulted in an overall larger and faster peaking pupillary response, even when controlling for performance and reaction time. Perceptual and cognitive capacities affected the peak amplitude of the pupil response across participants but did not diminish the impact of training on the other pupil metrics. Thus, we demonstrated that pupillometry can be used to characterize training-related and individual differences in effort during a challenging listening task. Importantly, the results indicate that speech-perception training not only affects overall word recognition, but also a physiological metric of cognitive effort, which has the potential to be a biomarker of hearing loss intervention outcome. Copyright © 2014 Society for Psychophysiological Research.
Intonation and dialog context as constraints for speech recognition.
Taylor, P; King, S; Isard, S; Wright, H
1998-01-01
This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
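A minimal sketch of the move-specific language-model idea, assuming add-one-smoothed bigrams and a simple dialog prior over move sequences; the paper's intonation model and recognizer likelihoods would enter as additional log terms. The function names, smoothing scheme, and data structures are illustrative assumptions, not the system described above.

```python
import math
from collections import defaultdict

def train_bigram(sentences, vocab_size):
    """Add-one-smoothed bigram model P(w2 | w1), trained on the utterances of a
    single move type ('acknowledge', 'instruct', ...); vocab_size + 1 covers the
    end-of-utterance marker."""
    counts, totals = defaultdict(lambda: defaultdict(int)), defaultdict(int)
    for words in sentences:
        for w1, w2 in zip(["<s>"] + words, words + ["</s>"]):
            counts[w1][w2] += 1
            totals[w1] += 1
    return lambda w1, w2: (counts[w1][w2] + 1) / (totals[w1] + vocab_size + 1)

def lm_log_score(words, bigram):
    """Log probability of a word sequence under one move-specific bigram model."""
    return sum(math.log(bigram(w1, w2)) for w1, w2 in zip(["<s>"] + words, words + ["</s>"]))

def pick_move(words, move_lms, move_transition, prev_move):
    """Combine each move-specific LM score with a dialog-model prior
    P(move | previous move); intonation and recognizer likelihoods would be
    added as further log terms in the same way."""
    return max(move_lms, key=lambda m: math.log(move_transition[prev_move][m])
                                       + lm_log_score(words, move_lms[m]))
```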
Malmberg, Kenneth J; Zeelenberg, Rene; Shiffrin, Richard M
2004-03-01
E. Hirshman, J. Fisher, T. Henthorn, J. Arndt, and A. Passannante (2002) found that Midazolam disrupts the mirror-patterned word-frequency effect for recognition memory by reversing the typical hit-rate advantage for low-frequency words. They noted that this result is consistent with dual-process accounts (e.g., R. C. Atkinson & J. F. Juola, 1974; G. Mandler, 1980; A. P. Yonelinas, 1994) of the word frequency effect for recognition memory (S. Joordens & W. E. Hockley, 2000; L. M. Reder et al., 2000). The present authors show that this finding is also consistent with a variety of single-process, retrieving-effectively-from-memory (REM) models (R. M. Shiffrin & M. Steyvers, 1997), the simplest of which assumes that Midazolam decreases the accuracy with which memory traces are stored. These findings therefore do not discriminate between single- and dual-process models of recognition memory.
The picture superiority effect in associative recognition.
Hockley, William E
2008-10-01
The picture superiority effect has been well documented in tests of item recognition and recall. The present study shows that the picture superiority effect extends to associative recognition. In three experiments, students studied lists consisting of random pairs of concrete words and pairs of line drawings; then they discriminated between intact (old) and rearranged (new) pairs of words and pictures at test. The discrimination advantage for pictures over words was seen in a greater hit rate for intact picture pairs, but there was no difference in the false alarm rates for the two types of stimuli. That is, there was no mirror effect. The same pattern of results was found when the test pairs consisted of the verbal labels of the pictures shown at study (Experiment 4), indicating that the hit rate advantage for picture pairs represents an encoding benefit. The results have implications for theories of the picture superiority effect and models of associative recognition.
Ally, Brandon A.
2012-01-01
Difficulty recognizing previously encountered stimuli is one of the earliest signs of incipient Alzheimer’s disease (AD). Work over the last 10 years has focused on how patients with AD and those in the prodromal stage of amnestic mild cognitive impairment (aMCI) make recognition decisions for visual and verbal stimuli. Interestingly, both groups of patients demonstrate markedly better memory for pictures than for words, to a degree that is significantly greater in magnitude than in their healthy older counterparts. Understanding this phenomenon not only helps to conceptualize how memory breaks down in AD, but also potentially provides the basis for future interventions. The current review will critically examine recent recognition memory work using pictures and words in the context of the dual-process theory of recognition and current hypotheses of cognitive breakdown in the course of very early AD. PMID:22927024
Effects of perceptual similarity but not semantic association on false recognition in aging
Gill, Emma
2017-01-01
This study investigated semantic and perceptual influences on false recognition in older and young adults in a variant on the Deese-Roediger-McDermott paradigm. In two experiments, participants encoded intermixed sets of semantically associated words, and sets of unrelated words. Each set was presented in a shared distinctive font. Older adults were no more likely to falsely recognize semantically associated lure words compared to unrelated lures also presented in studied fonts. However, they showed an increase in false recognition of lures which were related to studied items only by a shared font. This increased false recognition was associated with recollective experience. The data show that older adults do not always rely more on prior knowledge in episodic memory tasks. They converge with other findings suggesting that older adults may also be more prone to perceptually-driven errors. PMID:29302398
Dynamic and Contextual Information in HMM Modeling for Handwritten Word Recognition.
Bianne-Bernard, Anne-Laure; Menasri, Farès; Al-Hajj Mohamad, Rami; Mokbel, Chafic; Kermorvant, Christopher; Likforman-Sulem, Laurence
2011-10-01
This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for a better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by a factor of 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases of Latin- or Arabic-script handwriting: Rimes, IAM, and OpenHart. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition.
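To make the state-tying idea concrete, here is a small, hypothetical sketch of question-driven decision-tree clustering of context-dependent character states. The toy statistics, the letter-class questions, and the single-Gaussian likelihood-gain criterion are illustrative assumptions in the spirit of standard tree-based state tying, not the paper's actual questions or models.

```python
import math

# Toy frame statistics for context-dependent states of the character "a":
# keys are (previous_char, char, next_char); values are scalar observations.
STATS = {
    ("l", "a", "n"): [1.0, 1.2, 1.1],
    ("b", "a", "n"): [1.1, 0.9, 1.0],
    ("o", "a", "t"): [2.1, 2.3, 2.0],
    ("e", "a", "t"): [2.2, 2.4, 2.1],
    ("l", "a", "t"): [1.6, 1.5, 1.7],
}

ASCENDERS = {"l", "b", "d", "h", "k", "t"}

# Expert-style yes/no questions over the left/right context: "global" questions
# refer to broad letter classes, "precise" questions pin down a single letter.
QUESTIONS = [
    ("L:ascender?", lambda left, right: left in ASCENDERS),
    ("R:ascender?", lambda left, right: right in ASCENDERS),
    ("L:is 'l'?",   lambda left, right: left == "l"),
    ("R:is 't'?",   lambda left, right: right == "t"),
]

def gaussian_loglik(samples):
    """Log-likelihood of samples under their own maximum-likelihood Gaussian."""
    n = len(samples)
    mean = sum(samples) / n
    var = max(sum((x - mean) ** 2 for x in samples) / n, 1e-4)
    return -0.5 * n * (math.log(2 * math.pi * var) + 1)

def cluster_loglik(units):
    return gaussian_loglik([x for u in units for x in STATS[u]])

def best_split(units):
    parent = cluster_loglik(units)
    best = None
    for name, ask in QUESTIONS:
        yes = [u for u in units if ask(u[0], u[2])]
        no = [u for u in units if not ask(u[0], u[2])]
        if not yes or not no:
            continue
        gain = cluster_loglik(yes) + cluster_loglik(no) - parent
        if best is None or gain > best[0]:
            best = (gain, name, yes, no)
    return best

def build_tree(units, min_gain=0.5):
    """Greedy top-down clustering; each leaf becomes one tied (shared) state."""
    split = best_split(units)
    if split is None or split[0] < min_gain:
        return units
    _, name, yes, no = split
    return {name: {"yes": build_tree(yes, min_gain), "no": build_tree(no, min_gain)}}

print(build_tree(list(STATS)))
```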
Can false memory for critical lures occur without conscious awareness of list words?
Sadler, Daniel D; Sodmont, Sharon M; Keefer, Lucas A
2018-02-01
We examined whether the DRM false memory effect can occur when list words are presented below the perceptual identification threshold. In four experiments, subjects showed robust veridical memory for studied words and false memory for critical lures when masked list words were presented at exposure durations of 43 ms per word. Shortening the exposure duration to 29 ms virtually eliminated veridical recognition of studied words and completely eliminated false recognition of critical lures. Subjective visibility ratings in Experiments 3a and 3b support the assumption that words presented at 29 ms were subliminal for most participants, but were occasionally experienced with partial awareness by participants with higher perceptual awareness. Our results indicate that a false memory effect does not occur in the absence of conscious awareness of list words, but it does occur when word stimuli are presented at an intermediate level of visibility. Copyright © 2017 Elsevier Inc. All rights reserved.
Contextual diversity is a main determinant of word identification times in young readers.
Perea, Manuel; Soares, Ana Paula; Comesaña, Montserrat
2013-09-01
Recent research with college-aged skilled readers by Adelman and colleagues revealed that contextual diversity (i.e., the number of contexts in which a word appears) is a more critical determinant of visual word recognition than mere repeated exposure (i.e., word frequency) (Psychological Science, 2006, Vol. 17, pp. 814-823). Given that contextual diversity has been claimed to be a relevant factor in word acquisition in developing readers, contextual diversity should also be a main determinant of word identification times in developing readers. A lexical decision experiment was conducted to examine the effects of contextual diversity and word frequency in young readers (children in fourth grade). Results revealed a sizable effect of contextual diversity, but not of word frequency, thereby generalizing Adelman and colleagues' data to a child population. These findings call for the implementation of dynamic developmental models of visual word recognition that go beyond a learning rule based on mere exposure. Copyright © 2012 Elsevier Inc. All rights reserved.
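The distinction between the two predictors is easy to see in a toy computation: word frequency counts every occurrence, whereas contextual diversity counts only the number of distinct contexts (e.g., documents) in which a word appears. The corpus and counting scheme below are illustrative assumptions; real norms are derived from large text corpora.

```python
from collections import Counter

# Toy corpus: each inner list is one "context" (e.g., a document or passage).
corpus = [
    "the dog chased the ball".split(),
    "the dog slept all day".split(),
    "a dog and a cat met".split(),
    "the spaceship carried the crew the crew the crew".split(),
]

word_frequency = Counter(w for doc in corpus for w in doc)          # raw exposure
contextual_diversity = Counter(w for doc in corpus for w in set(doc))  # distinct contexts

for w in ("dog", "crew"):
    print(w, "frequency:", word_frequency[w], "contexts:", contextual_diversity[w])
# "crew" and "dog" both occur 3 times, but "crew" appears in only 1 context
# while "dog" is spread over 3 contexts, so the two measures dissociate.
```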
Semantic and phonological schema influence spoken word learning and overnight consolidation.
Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H
2018-06-01
We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.
Niefind, Florian; Dimigen, Olaf
2016-12-01
During reading, the parafoveal processing of an upcoming word n+1 can influence word recognition in two ways: It can affect fixation behavior during the preceding fixation on word n (parafovea-on-fovea effect, POF), and it can facilitate subsequent foveal processing once word n+1 is fixated (preview benefit). While preview benefits are established, evidence for POF effects is mixed. Recently, it has been suggested that POF effects exist, but have a delayed impact on saccade planning and thus coincide with preview benefits measured on word n+1. We combined eye movement and EEG recordings to investigate and separate neural correlates of POF and preview benefit effects. Participants read lists of nouns either in a boundary paradigm or the RSVP-with-flankers paradigm, while we recorded fixation- or event-related potentials (FRPs/ERPs), respectively. The validity and lexical frequency of the word shown as preview for the upcoming word n+1 were orthogonally manipulated. Analyses focused on the first fixation on word n+1. Preview validity (correct vs. incorrect preview) strongly modulated fixation times and electrophysiological N1 amplitudes, replicating previous findings. Importantly, gaze durations and FRPs measured on word n+1 were also affected by the frequency of the word shown as preview, with low-frequency previews eliciting a sustained, N400-like centroparietal negativity. Results support the idea that POF effects exist but affect word recognition with a delay. Lastly, once word n+1 was fixated, its frequency also modulated N1 amplitudes in ERPs and FRPs. Taken together, we separated immediate and delayed effects of parafoveal processing on brain correlates of word recognition. © 2016 Society for Psychophysiological Research.
Reading Big Words: Instructional Practices to Promote Multisyllabic Word Reading Fluency
ERIC Educational Resources Information Center
Toste, Jessica R.; Williams, Kelly J.; Capin, Philip
2017-01-01
Poorly developed word recognition skills are the most pervasive and debilitating source of reading challenges for students with learning disabilities (LD). With a notable decrease in word reading instruction in the upper elementary grades, struggling readers receive fewer instructional opportunities to develop proficient word reading skills, yet…
Gröschel, J; Philipp, F; Skonetzki, St; Genzwürker, H; Wetter, Th; Ellinger, K
2004-02-01
Precise documentation of medical treatment in emergency medical missions and for resuscitation is essential from a medical, legal and quality assurance point of view [Anästhesiologie und Intensivmedizin, 41 (2000) 737]. All conventional methods of time recording are either too inaccurate or too elaborate for routine application. Automated speech recognition may offer a solution. A dedicated programme for the documentation of all time events was developed. Standard speech recognition software (IBM ViaVoice 7.0) was adapted and installed on two different computer systems. One was a stationary PC (500MHz Pentium III, 128MB RAM, Soundblaster PCI 128 Soundcard, Win NT 4.0), the other was a mobile pen-PC that had already proven its value during emergency missions [Der Notarzt 16, p. 177] (Fujitsu Stylistic 2300, 230MHz MMX Processor, 160MB RAM, embedded soundcard ESS 1879 chipset, Win98 2nd ed.). On both computers two different microphones were tested. One was a standard headset that came with the recognition software, the other was a small lavalier condenser microphone (EM 116 from Vivanco) that could be attached to the operator's collar. Seven women and 15 men spoke a text with 29 phrases to be recognised. Two emergency physicians tested the system in a simulated emergency setting using the collar microphone and the pen-PC with an analogue wireless connection. Overall recognition was best for the PC with a headset (89%), followed by the pen-PC with a headset (85%), the PC with the collar microphone (84%) and the pen-PC with the collar microphone (80%). Nevertheless, the difference was not statistically significant. Recognition became significantly worse (89.5% versus 82.3%, P<0.0001) when numbers had to be recognised. The gender of the speaker and the number of words in a sentence had no influence. Average recognition in the simulated emergency setting was 75%. At no time did false recognition appear. Time recording with automated speech recognition seems to be possible in emergency medical missions. Although results show an average recognition of only 75%, it is possible that missing elements may be reconstructed more precisely. Future technology should integrate a secure wireless connection between microphone and mobile computer. The system could then prove its value for real out-of-hospital emergencies.
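The core idea, a small fixed set of spoken phrases that trigger timestamped event entries, can be sketched as follows. The phrase list, the callback name, and the confidence threshold are hypothetical stand-ins; the actual ViaVoice integration and the original 29 German phrases are not reproduced here.

```python
import datetime

# Hypothetical restricted phrase set mapping spoken phrases to event codes.
EVENT_PHRASES = {
    "arrival at patient": "T_ARRIVAL",
    "intubation": "T_INTUBATION",
    "adrenaline one milligram": "T_ADRENALINE_1MG",
}

event_log = []

def on_recognition(phrase, confidence, threshold=0.6):
    """Callback for a recognized utterance: timestamp only known event phrases.
    A real system would receive `phrase` and `confidence` from the recognition
    engine; both the callback shape and the threshold are assumptions."""
    phrase = phrase.strip().lower()
    if phrase in EVENT_PHRASES and confidence >= threshold:
        event_log.append((datetime.datetime.now().isoformat(timespec="seconds"),
                          EVENT_PHRASES[phrase]))

# Toy usage: the known phrase is logged, the unknown one is ignored.
on_recognition("Arrival at patient", confidence=0.92)
on_recognition("mumbled something", confidence=0.41)
print(event_log)
```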
Wilson, Richard H
2011-01-01
Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study, which was retrospective and descriptive, examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting, with the aim of examining the relations among three audiometric measures (pure-tone thresholds, word-recognition performance in quiet, and word-recognition performance in multitalker babble) for veterans seeking remediation for their hearing loss. The participants were 3430 veterans who for the most part were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). The data were collected in the course of a 60 min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Overall, the 1000-8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants, with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct for 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1-3% better than the LE. Of the 3291 who completed the WIN on both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinic instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that word-in-noise tasks should be considered the "stress test" for auditory function. American Academy of Audiology.
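For readers unfamiliar with the primary metric, the sketch below shows one standard form of the Spearman-Kärber calculation of the 50% correct point from the total number of words repeated correctly. The default presentation parameters (seven SNRs from 24 dB down in 4-dB steps, 5 or 10 words per SNR) are taken from published descriptions of the WIN, not from this record, so treat them as assumptions.

```python
def spearman_karber_50(n_correct, highest_snr=24.0, step=4.0, words_per_snr=5):
    """50% correct point (in dB SNR) via the Spearman-Kärber equation.

    Assumes a WIN-style descending protocol: `words_per_snr` words at each SNR
    from `highest_snr` downward in `step`-dB decrements.
    """
    return highest_snr + step / 2.0 - step * n_correct / words_per_snr

# A listener who repeats 25 of 35 words correctly on the 5-words-per-SNR list:
print(spearman_karber_50(25))                      # 26 - 0.8 * 25 = 6.0 dB SNR
# The equivalent score on a 70-word (10 words per SNR) protocol:
print(spearman_karber_50(50, words_per_snr=10))    # 26 - 0.4 * 50 = 6.0 dB SNR
```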
An ERP investigation of visual word recognition in syllabary scripts.
Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J
2013-06-01
The bimodal interactive-activation model has been successfully applied to understanding the neurocognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, in the present study we examined word recognition in a different writing system, the Japanese syllabary scripts hiragana and katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words in which the prime and target words were both in the same script (within-script priming, Exp. 1) or were in the opposite script (cross-script priming, Exp. 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sublexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in "Experiment 1: Within-script priming", in which the prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neurocognitive processes that operate in similar manners across different writing systems and languages, as well as pointing to the viability of the bimodal interactive-activation framework for modeling such processes.
Memory Asymmetry of Forward and Backward Associations in Recognition Tasks
Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han
2013-01-01
There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiments 1–2) or pairs (Experiments 3–6) during the study phase. They then recalled a word in response to a cue in a cued recall task (Experiments 1–4), and judged whether the two presented words were in the same or in a different order compared to the study phase in a recognition task (Experiments 1–6). To control for perceptual matching between the study and test phase, participants were presented with vertical test pairs when they made directional judgments in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels as backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than that of backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction as a constituent part of a memory trace can facilitate associative memory. PMID:22924326
Wagenmakers, Eric-Jan; Raaijmakers, Jeroen G W
2006-12-01
The role of orthographically similar words (i.e., neighbours) in the word recognition process has been studied extensively using short-term priming paradigms (e.g., Colombo, 1986). Here we demonstrate that long-term effects of neighbour priming can also be obtained. Experiment 1 showed that prior study of a neighbour (e.g., TANGO) increased later lexical decision performance for similar words (e.g., MANGO), but decreased performance for similar pseudowords (e.g., LANGO). Experiment 2 replicated this bias effect and showed that the increase in lexical decision performance due to neighbour priming is selectively due to words from a relatively sparse neighbourhood. Explanations of the bias effect in terms of lexical activation and episodic memory retrieval are discussed.
Semantic Ambiguity Effects in L2 Word Recognition.
Ishida, Tomomi
2018-06-01
The present study examined ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words in lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a letter string on a computer screen was an English word or not. An ambiguity advantage was found for both groups, and the effect was larger for the non-native speakers than for the native speakers. The findings imply that the larger ambiguity advantage in L2 processing reflects L2 speakers' slower generation of adequate feedback activation from the semantic level to the orthographic level.
ERIC Educational Resources Information Center
McCormick, Sandra; Becker, Evelyn Z.
1996-01-01
Reviews investigations related to word learning of learning disabled students. Finds that direct word study leads to reading improvement for learning disabled pupils, but that indirect instruction also provides assistance. Finds also that word knowledge instruction not only promotes word learning, but can heighten learning disabled students'…
When Half a Word Is Enough: Infants Can Recognize Spoken Words Using Partial Phonetic Information.
ERIC Educational Resources Information Center
Fernald, Anne; Swingley, Daniel; Pinto, John P.
2001-01-01
Two experiments tracked infants' eye movements to examine use of word-initial information to understand fluent speech. Results indicated that 21- and 18-month-olds recognized partial words as quickly and reliably as whole words. Infants' productive vocabulary and reaction time were related to word recognition accuracy. Results show that…
Recollection and familiarity in hippocampal amnesia.
Turriziani, Patrizia; Serra, Laura; Fadda, Lucia; Caltagirone, Carlo; Carlesimo, Giovanni Augusto
2008-01-01
Currently, there is general agreement that two distinct cognitive operations, recollection and familiarity, contribute to performance on recognition memory tests. However, there is controversy about whether recollection and familiarity reflect different memory processes, mediated by distinct neural substrates (dual-process models), or whether they are the expression of memory traces of different strength in the context of a unitary declarative memory system (unitary-strength models). Critical in this debate is the status of recognition memory in hippocampal amnesia and, in particular, whether the various structures in the medial temporal lobe (MTL) contribute differentially to the recollection and familiarity components of recognition. The present study aimed to explore the relative contribution of recollection and familiarity to recognition of words that had been previously read or that had been previously generated in a group of severely amnesic patients with cerebral damage restricted to the hippocampus. A convergent pattern of results emerged when we used a subjective (remember/know; R/K) method and an objective (process dissociation procedure; PDP) method to estimate the contribution of recollection and familiarity to recognition performance. In both the PDP and R/K procedures, healthy controls disclosed significantly higher recollection estimates for words that had been anagrammed than for words that had been read. Amnesic patients' recollection scores were not different for words that had been generated or that had been read, and the recollection estimate for words that had been generated was significantly reduced as compared to the group of healthy controls. For familiarity, both healthy controls and amnesic patients recognized as familiar more words that had been generated than words that had been read, and there was no difference between the two groups. These data support the hypothesis of a specific role of the hippocampus in recollection processes and suggest that other components of the MTL (e.g., perirhinal cortex) may be more involved in the process of familiarity. 2008 Wiley-Liss, Inc.
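As background on the objective estimation method mentioned above, the sketch below gives the textbook process-dissociation estimates of recollection and familiarity from inclusion and exclusion performance (Jacoby, 1991). The exact scoring used in this particular study may differ; the numbers in the usage line are purely illustrative.

```python
def pdp_estimates(p_inclusion, p_exclusion):
    """Textbook process-dissociation estimates.

    p_inclusion: probability of responding "old" under inclusion instructions.
    p_exclusion: probability of responding "old" under exclusion instructions.
    Recollection R = I - E; familiarity F = E / (1 - R).
    """
    recollection = p_inclusion - p_exclusion
    if recollection >= 1.0:
        return recollection, float("nan")     # familiarity undefined at R = 1
    familiarity = p_exclusion / (1.0 - recollection)
    return recollection, familiarity

print(pdp_estimates(0.80, 0.30))   # illustrative values -> R = 0.50, F = 0.60
```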
NASA Astrophysics Data System (ADS)
Tanioka, Toshimasa; Egashira, Hiroyuki; Takata, Mayumi; Okazaki, Yasuhisa; Watanabe, Kenzi; Kondo, Hiroki
We have designed and implemented a voice-driven PC operation support system for a physically disabled person with a speech impediment. Voice operation is an effective method for a physically disabled person with involuntary movements of the limbs and head. We applied a commercial speech recognition engine to develop our system for practical purposes. Adoption of a commercial engine reduces development cost and helps make the system useful to other people with speech impediments. We customized the commercial speech recognition engine so that it can recognize the utterances of a person with a speech impediment. We restricted the words that the recognition engine accepts and separated target words from similarly pronounced words to avoid misrecognition. The huge number of words registered in commercial speech recognition engines causes frequent misrecognition of speech-impaired users' utterances, because those utterances are unclear and unstable. We solved this problem by narrowing the choice of inputs down to a small number and by registering ambiguous pronunciations in addition to the original ones. To realize full character input and full PC operation with a small vocabulary, we designed multiple input modes with categorized dictionaries and introduced two-step input in every mode except numeral input, enabling correct operation with a small number of words. The system we have developed is at a practical level. The first author of this paper is physically disabled and has a speech impediment. Using this system, he has been able not only to input characters into the PC but also to operate the Windows system smoothly. He uses the system in his daily life, and this paper was written by him with it. At present, the speech recognition is customized to him; it is, however, possible to customize it for other users by changing the registered words and pronunciations according to each user's utterances.
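The design just described (a small mode-specific vocabulary, extra registered pronunciations, and two-step selection of category then item) can be sketched as plain data structures and a dispatcher. Everything below is hypothetical and illustrative: the mode names, vocabulary entries, pronunciation variants, and function names are assumptions, and the actual engine integration is not reproduced.

```python
# Hypothetical categorized dictionaries: one small vocabulary per input mode.
MODES = {
    "kana": {
        "a-row": ["a", "i", "u", "e", "o"],
        "ka-row": ["ka", "ki", "ku", "ke", "ko"],
    },
    "command": {
        "window": ["close", "minimize", "maximize"],
        "mouse": ["click", "double-click", "drag"],
    },
}

# Alternate pronunciations registered for one speaker (illustrative strings only).
PRONUNCIATION_VARIANTS = {
    "ka-row": ["ka-row", "kah-roh"],
    "close": ["close", "cloze"],
}

def normalize(token):
    """Map a recognized token to its canonical word via the registered variants."""
    for canonical, variants in PRONUNCIATION_VARIANTS.items():
        if token in variants:
            return canonical
    return token

def two_step_input(mode, first_word, second_word):
    """Resolve a recognized pair of words: category word first, then item word."""
    categories = MODES[mode]
    category = normalize(first_word)
    item = normalize(second_word)
    if category in categories and item in categories[category]:
        return item
    return None   # unrecognized combination; the user would be prompted again

print(two_step_input("command", "window", "cloze"))   # -> "close"
print(two_step_input("kana", "kah-roh", "ki"))        # -> "ki"
```

Keeping each step's active vocabulary this small is what allows an off-the-shelf engine to stay accurate for unclear or unstable speech, at the cost of requiring two utterances per selection.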