Sample records for automatic word recognition

  1. A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.

    ERIC Educational Resources Information Center

    Paul, James E., Jr.

    Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…

  2. Automatic speech recognition technology development at ITT Defense Communications Division

    NASA Technical Reports Server (NTRS)

    White, George M.

    1977-01-01

    An assessment of the applications of automatic speech recognition to defense communication systems is presented. Future research efforts include investigations into the following areas: (1) dynamic programming; (2) recognition of speech degraded by noise; (3) speaker independent recognition; (4) large vocabulary recognition; (5) word spotting and continuous speech recognition; and (6) isolated word recognition.

  3. Automatic vigilance for negative words in lexical decision and naming: comment on Larsen, Mercer, and Balota (2006).

    PubMed

    Estes, Zachary; Adelman, James S

    2008-08-01

    An automatic vigilance hypothesis states that humans preferentially attend to negative stimuli, and this attention to negative valence disrupts the processing of other stimulus properties. Thus, negative words typically elicit slower color naming, word naming, and lexical decisions than neutral or positive words. Larsen, Mercer, and Balota analyzed the stimuli from 32 published studies, and they found that word valence was confounded with several lexical factors known to affect word recognition. Indeed, with these lexical factors covaried out, Larsen et al. found no evidence of automatic vigilance. The authors report a more sensitive analysis of 1011 words. Results revealed a small but reliable valence effect, such that negative words (e.g., "shark") elicit slower lexical decisions and naming than positive words (e.g., "beach"). Moreover, the relation between valence and recognition was categorical rather than linear; the extremity of a word's valence did not affect its recognition. This valence effect was not attributable to word length, frequency, orthographic neighborhood size, contextual diversity, first phoneme, or arousal. Thus, the present analysis provides the most powerful demonstration of automatic vigilance to date.

  4. Rapid Word Recognition as a Measure of Word-Level Automaticity and Its Relation to Other Measures of Reading

    ERIC Educational Resources Information Center

    Frye, Elizabeth M.; Gosky, Ross

    2012-01-01

    The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…

  5. Automatization and Orthographic Development in Second Language Visual Word Recognition

    ERIC Educational Resources Information Center

    Kida, Shusaku

    2016-01-01

    The present study investigated second language (L2) learners' acquisition of automatic word recognition and the development of L2 orthographic representation in the mental lexicon. Participants in the study were Japanese university students enrolled in a compulsory course involving a weekly 30-minute sustained silent reading (SSR) activity with…

  6. Efficacy of a Classroom Integrated Intervention of Phonological Awareness and Word Recognition in "Double-Deficit Children" Learning a Regular Orthography

    ERIC Educational Resources Information Center

    Mayer, Andreas; Motsch, Hans-Joachim

    2015-01-01

    This study analysed the effects of a classroom intervention focusing on phonological awareness and/or automatized word recognition in children with a deficit in the domains of phonological awareness and rapid automatized naming ("double deficit"). According to the double-deficit hypothesis (Wolf & Bowers, 1999), these children belong…

  7. Military applications of automatic speech recognition and future requirements

    NASA Technical Reports Server (NTRS)

    Beek, Bruno; Cupples, Edward J.

    1977-01-01

    An updated summary of the state-of-the-art of automatic speech recognition and its relevance to military applications is provided. A number of potential systems for military applications are under development. These include: (1) digital narrowband communication systems; (2) automatic speech verification; (3) on-line cartographic processing unit; (4) word recognition for militarized tactical data system; and (5) voice recognition and synthesis for aircraft cockpit.

  8. Emotion and language: Valence and arousal affect word recognition

    PubMed Central

    Brysbaert, Marc; Warriner, Amy Beth

    2014-01-01

    Emotion influences most aspects of cognition and behavior, but emotional factors are conspicuously absent from current models of word recognition. The influence of emotion on word recognition has mostly been reported in prior studies on the automatic vigilance for negative stimuli, but the precise nature of this relationship is unclear. Various models of automatic vigilance have claimed that the effect of valence on response times is categorical, an inverted-U, or interactive with arousal. The present study used a sample of 12,658 words, and included many lexical and semantic control factors, to determine the precise nature of the effects of arousal and valence on word recognition. Converging empirical patterns observed in word-level and trial-level data from lexical decision and naming indicate that valence and arousal exert independent monotonic effects: Negative words are recognized more slowly than positive words, and arousing words are recognized more slowly than calming words. Valence explained about 2% of the variance in word recognition latencies, whereas the effect of arousal was smaller. Valence and arousal do not interact, but both interact with word frequency, such that valence and arousal exert larger effects among low-frequency words than among high-frequency words. These results necessitate a new model of affective word processing whereby the degree of negativity monotonically and independently predicts the speed of responding. This research also demonstrates that incorporating emotional factors, especially valence, improves the performance of models of word recognition. PMID:24490848

  9. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  10. The Use of an Autonomous Pedagogical Agent and Automatic Speech Recognition for Teaching Sight Words to Students with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Saadatzi, Mohammad Nasser; Pennington, Robert C.; Welch, Karla C.; Graham, James H.; Scott, Renee E.

    2017-01-01

    In the current study, we examined the effects of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and constant time delay during the instruction of reading sight words aloud to young adults with autism spectrum disorder. We used a concurrent multiple baseline across participants design to…

  11. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    PubMed

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups, one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched prior studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.

  12. Prosody's Contribution to Fluency: An Examination of the Theory of Automatic Information Processing

    ERIC Educational Resources Information Center

    Schrauben, Julie E.

    2010-01-01

    LaBerge and Samuels' (1974) theory of automatic information processing in reading offers a model that explains how and where the processing of information occurs and the degree to which processing of information occurs. These processes are dependent upon two criteria: accurate word decoding and automatic word recognition. However, LaBerge and…

  13. Separating Speed from Accuracy in Beginning Reading Development

    ERIC Educational Resources Information Center

    Juul, Holger; Poulsen, Mads; Elbro, Carsten

    2014-01-01

    Phoneme awareness, letter knowledge, and rapid automatized naming (RAN) are well-known kindergarten predictors of later word recognition skills, but it is not clear whether they predict developments in accuracy or speed, or both. The present longitudinal study of 172 Danish beginning readers found that speed of word recognition mainly developed…

  14. A System for Mailpiece ZIP Code Assignment through Contextual Analysis. Phase 2

    DTIC Science & Technology

    1991-03-01

    Segmentation, address block interpretation, automatic feature generation, word recognition, feature detection, word verification, optical character recognition, directory ... in the Phase III effort. 1.1 Motivation: The United States Postal Service (USPS) deploys large numbers of optical character recognition (OCR) machines ... [2] Gronmeyer, L. K., Ruffin, B. W., Lybanon, M. A., Neely, P. L., and Pierce, S. E., An Overview of Optical Character Recognition (OCR) ...

  15. The Effects of Using Flashcards to Develop Automaticity with Key Vocabulary Words for Students with and without Learning Disabilities Enrolled in a High School Spanish Course

    ERIC Educational Resources Information Center

    Stager, Phillip A.

    2010-01-01

    The purpose of this study was to investigate the effects of using flashcards to develop automaticity (rapid word recognition) with key vocabulary words and phrases in order to improve fluency and reading comprehension skills for participants with and without diagnosed learning disabilities enrolled in a high school Spanish course. Eighty-seven…

  16. Sight-Word Practice in a Flash!

    ERIC Educational Resources Information Center

    Erwin, Robin W., Jr.

    2016-01-01

    For learners who need sight-word practice, including young students and struggling readers, digital flash cards may promote automatic word recognition when used as a supplemental activity to regular reading instruction. A novel use of common presentation software efficiently supports this practice strategy.

  17. Development of A Two-Stage Procedure for the Automatic Recognition of Dysfluencies in the Speech of Children Who Stutter: I. Psychometric Procedures Appropriate for Selection of Training Material for Lexical Dysfluency Classifiers

    PubMed Central

    Howell, Peter; Sackin, Stevie; Glenn, Kazan

    2007-01-01

    This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This and the following article together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second, the segments are categorized. The units that are segmented are words. Here, assessments by human judges on the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, categorization of the words as fluent, repetition or prolongation, and duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878

  18. Effects of Word Recognition Training in a Picture-Word Interference Task: Automaticity vs. Speed.

    ERIC Educational Resources Information Center

    Ehri, Linnea C.

    First and second graders were taught to recognize a set of written words either more accurately or more rapidly. Both before and after word training, they named pictures printed with and without these words as distractors. Of interest was whether training would enhance or diminish the interference created by these words in the picture naming task.…

  19. Processing Strategy and PI Effects in Recognition Memory of Word Lists.

    ERIC Educational Resources Information Center

    Hodge, Milton H.; Britton, Bruce K.

    Previous research by A. I. Schulman argued that an observed systematic decline in recognition memory in long word lists was due to the build-up of input and output proactive interference (PI). It also suggested that input PI resulted from process automatization; that is, each list item was processed or encoded in much the same way, producing a set…

  20. The Influence of Anticipation of Word Misrecognition on the Likelihood of Stuttering

    ERIC Educational Resources Information Center

    Brocklehurst, Paul H.; Lickley, Robin J.; Corley, Martin

    2012-01-01

    This study investigates whether the experience of stuttering can result from the speaker's anticipation of his words being misrecognized. Twelve adults who stutter (AWS) repeated single words into what appeared to be an automatic speech-recognition system. Following each iteration of each word, participants provided a self-rating of whether they…

  1. Understanding Cognitive Development: Automaticity and the Early Years Child

    ERIC Educational Resources Information Center

    Gray, Colette

    2004-01-01

    In recent years a growing body of evidence has implicated deficits in the automaticity of fundamental facts such as word and number recognition in a range of disorders, including attention deficit hyperactivity disorder, dyslexia, apraxia and autism. Variously described as habits, fluency, chunking and overlearning, automatic processes are best…

  2. Computational Modeling of Emotions and Affect in Social-Cultural Interaction

    DTIC Science & Technology

    2013-10-02

    ... acoustic and textual information sources. Second, a cross-lingual study was performed that shed light on how human perception and automatic recognition ... speech is produced, a speaker's pitch and intonational pattern, and word usage. Better feature representation and advanced approaches were used to ... recognition performance, and improved our understanding of language/cultural impact on human perception of emotion and automatic classification.

  3. Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.

    PubMed

    Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf

    2015-09-01

    Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.

  4. The tool for the automatic analysis of lexical sophistication (TAALES): version 2.0.

    PubMed

    Kyle, Kristopher; Crossley, Scott; Berger, Cynthia

    2017-07-11

    This study introduces the second release of the Tool for the Automatic Analysis of Lexical Sophistication (TAALES 2.0), a freely available and easy-to-use text analysis tool. TAALES 2.0 is housed on a user's hard drive (allowing for secure data processing) and is available on most operating systems (Windows, Mac, and Linux). TAALES 2.0 adds 316 indices to the original tool. These indices are related to word frequency, word range, n-gram frequency, n-gram range, n-gram strength of association, contextual distinctiveness, word recognition norms, semantic network, and word neighbors. In this study, we validated TAALES 2.0 by investigating whether its indices could be used to model both holistic scores of lexical proficiency in free writes and word choice scores in narrative essays. The results indicated that the TAALES 2.0 indices could be used to explain 58% of the variance in lexical proficiency scores and 32% of the variance in word-choice scores. Newly added TAALES 2.0 indices, including those related to n-gram association strength, word neighborhood, and word recognition norms, featured heavily in these predictor models, suggesting that TAALES 2.0 represents a substantial upgrade.
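
    As an illustration of what such indices look like, a word-frequency index is essentially the mean log frequency of a text's words against a reference list; the toy version below, with an invented frequency table, is only a sketch and not TAALES code.

    ```python
    import math
    import re

    def mean_log_frequency(text, freq_table):
        """Average log10 corpus frequency of the words in `text` found in `freq_table`
        (word -> corpus count); one of many lexical-sophistication indices."""
        words = re.findall(r"[a-z']+", text.lower())
        logs = [math.log10(freq_table[w]) for w in words if w in freq_table]
        return sum(logs) / len(logs) if logs else 0.0

    # Toy example with invented counts:
    print(mean_log_frequency("The shark near the beach", {"the": 1000000, "shark": 800, "beach": 5000}))
    ```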

  5. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.

  6. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or hard of hearing cannot benefit from audio-only speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio but also from visual cues such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method that recognizes speech from multimodal visual information without using any audio. First, an Active Shape Model (ASM) is used to detect and track the face and lips in a video sequence. Second, shape, optical flow, and spatial frequency features are extracted from the lip region located by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is trained to classify the spoken words. Experiments classifying several words show promising results for the proposed method.
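
    A rough sketch of the final classification stage (chronologically ordered lip features fed to a support vector machine) follows; the fixed-length resampling and the use of scikit-learn are assumptions made for illustration, not details from the paper.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    def to_fixed_length(frame_features, n_frames=20):
        """Resample a variable-length sequence of per-frame lip features (shape,
        optical flow, spatial frequency) to a fixed-length vector, keeping order."""
        idx = np.linspace(0, len(frame_features) - 1, n_frames).astype(int)
        return np.concatenate([frame_features[i] for i in idx])

    def train_word_classifier(sequences, labels):
        """sequences: list of (n_frames_i, n_features) arrays; labels: spoken words."""
        X = np.array([to_fixed_length(s) for s in sequences])
        return SVC(kernel="rbf").fit(X, labels)
    ```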

  7. The Importance of Concept of Word in Text as a Predictor of Sight Word Development in Spanish

    ERIC Educational Resources Information Center

    Ford, Karen L.; Invernizzi, Marcia A.; Meyer, J. Patrick

    2015-01-01

    The goal of the current study was to determine whether Concept of Word in Text (COW-T) predicts later sight word reading achievement in Spanish, as it does in English. COW-T requires that children have beginning sound awareness, automatic recognition of letters and letter sounds, and the ability to coordinate these skills to finger point…

  8. Word-Related N170 Responses to Implicit and Explicit Reading Tasks in Neoliterate Adults

    ERIC Educational Resources Information Center

    Sánchez-Vincitore, Laura V.; Avery, Trey; Froud, Karen

    2018-01-01

    The present study addresses word recognition automaticity in Spanish-speaking adults who are neoliterate by assessing the event-related potential N170 for word stimuli. Participants engaged in two reading conditions that vary the degree of attention required for linguistic components of reading: (a) an implicit reading task, in which they detected…

  9. The Relationship Between Reading Fluency and Vocabulary in Fifth Grade Turkish Students

    ERIC Educational Resources Information Center

    Yildirim, Kasim; Rasinski, Timothy; Ates, Seyit; Fitzgerald, Shawn; Zimmerman, Belinda; Yildiz, Mustafa

    2014-01-01

    Reading fluency has traditionally been recognized as a competency associated with word recognition and comprehension. As readers become more automatic in word identification they are able to devote less attention and cognitive resources to word decoding and more to text comprehension. The act of reading itself has been associated with growth in…

  10. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316

  11. Modelling Errors in Automatic Speech Recognition for Dysarthric Speakers

    NASA Astrophysics Data System (ADS)

    Caballero Morales, Santiago Omar; Cox, Stephen J.

    2009-12-01

    Dysarthria is a motor speech disorder characterized by weakness, paralysis, or poor coordination of the muscles responsible for speech. Although automatic speech recognition (ASR) systems have been developed for disordered speech, factors such as low intelligibility and limited phonemic repertoire decrease speech recognition accuracy, making conventional speaker adaptation algorithms perform poorly on dysarthric speakers. In this work, rather than adapting the acoustic models, we model the errors made by the speaker and attempt to correct them. For this task, two techniques have been developed: (1) a set of "metamodels" that incorporate a model of the speaker's phonetic confusion matrix into the ASR process; (2) a cascade of weighted finite-state transducers at the confusion matrix, word, and language levels. Both techniques attempt to correct the errors made at the phonetic level and make use of a language model to find the best estimate of the correct word sequence. Our experiments show that both techniques outperform standard adaptation techniques.
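
    One of the two correction techniques, the phoneme-confusion "metamodels," amounts to rescoring candidate words by how likely the speaker's recognized phones are given each word's canonical phones, combined with a language-model term. The toy sketch below assumes one-to-one phone alignment (the paper's WFST cascade also handles insertions and deletions); the lexicon, confusion table, and function names are invented for illustration.

    ```python
    import math

    def word_score(recognized, canonical, confusion, lm_logprob):
        """Log-probability that `recognized` phones arose from a word whose canonical
        pronunciation is `canonical`, using a per-speaker phoneme confusion matrix
        (dict mapping (canonical, recognized) -> probability), plus an LM term."""
        score = lm_logprob
        for canon, recog in zip(canonical, recognized):
            score += math.log(confusion.get((canon, recog), 1e-6))  # floor unseen confusions
        return score

    def correct(recognized, lexicon, confusion, lm):
        """Return the lexicon word that best explains the recognizer's phone output."""
        return max(lexicon, key=lambda w: word_score(recognized, lexicon[w],
                                                     confusion, lm.get(w, -10.0)))
    ```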

  12. Intonation and dialog context as constraints for speech recognition.

    PubMed

    Taylor, P; King, S; Isard, S; Wright, H

    1998-01-01

    This paper describes a way of using intonation and dialog context to improve the performance of an automatic speech recognition (ASR) system. Our experiments were run on the DCIEM Maptask corpus, a corpus of spontaneous task-oriented dialog speech. This corpus has been tagged according to a dialog analysis scheme that assigns each utterance to one of 12 "move types," such as "acknowledge," "query-yes/no" or "instruct." Most ASR systems use a bigram language model to constrain the possible sequences of words that might be recognized. Here we use a separate bigram language model for each move type. We show that when the "correct" move-specific language model is used for each utterance in the test set, the word error rate of the recognizer drops. Of course when the recognizer is run on previously unseen data, it cannot know in advance what move type the speaker has just produced. To determine the move type we use an intonation model combined with a dialog model that puts constraints on possible sequences of move types, as well as the speech recognizer likelihoods for the different move-specific models. In the full recognition system, the combination of automatic move type recognition with the move specific language models reduces the overall word error rate by a small but significant amount when compared with a baseline system that does not take intonation or dialog acts into account. Interestingly, the word error improvement is restricted to "initiating" move types, where word recognition is important. In "response" move types, where the important information is conveyed by the move type itself--for example, positive versus negative response--there is no word error improvement, but recognition of the response types themselves is good. The paper discusses the intonation model, the language models, and the dialog model in detail and describes the architecture in which they are combined.
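
    The central mechanism (a separate bigram language model per dialog move type, combined with a prior over move sequences) can be sketched as below; the add-one smoothing and the way the dialog-model prior is folded in are illustrative assumptions.

    ```python
    import math
    from collections import defaultdict

    class BigramLM:
        """Add-one-smoothed bigram language model trained on utterances of one move type."""
        def __init__(self, sentences):
            self.counts = defaultdict(lambda: defaultdict(int))
            self.vocab = set()
            for words in sentences:
                for prev, curr in zip(["<s>"] + words, words + ["</s>"]):
                    self.counts[prev][curr] += 1
                    self.vocab.update((prev, curr))

        def logprob(self, words):
            lp = 0.0
            for prev, curr in zip(["<s>"] + words, words + ["</s>"]):
                c = self.counts[prev]
                lp += math.log((c[curr] + 1) / (sum(c.values()) + len(self.vocab)))
            return lp

    def best_move_type(word_sequence, move_lms, move_prior):
        """Score a recognized word sequence under each move-specific LM plus a
        dialog-model prior, and return the best-scoring move type."""
        return max(move_lms, key=lambda m: move_lms[m].logprob(word_sequence)
                                           + math.log(move_prior[m]))
    ```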

  13. The Development of the Speaker Independent ARM Continuous Speech Recognition System

    DTIC Science & Technology

    1992-01-01

    ... spoken airborne reconnaissance reports using a speech recognition system based on phoneme-level hidden Markov models (HMMs). Previous versions of the ARM ... will involve automatic selection from multiple model sets, corresponding to different speaker types, and that the most rudimentary partition of a ... The vocabulary size for the ARM task is 497 words. These words are related to the phoneme-level symbols corresponding to the models in the model set.

  14. Thai Automatic Speech Recognition

    DTIC Science & Technology

    2005-01-01

    ... used in an external DARPA evaluation involving medical scenarios between an American doctor and a naïve monolingual Thai patient. 2. Thai Language ... dictionary generation more challenging, and (3) the lack of word segmentation, which calls for automatic segmentation approaches to make n-gram language ... requires a dictionary and provides various segmentation algorithms to automatically select suitable segmentations. Here we used a maximal matching ...

  15. Offline Arabic handwriting recognition: a survey.

    PubMed

    Lorigo, Liana M; Govindaraju, Venu

    2006-05-01

    The automatic recognition of text on scanned images has enabled many applications such as searching for words in large volumes of documents, automatic sorting of postal mail, and convenient editing of previously printed documents. The domain of handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different methods have been proposed and applied to various types of images. This paper provides a comprehensive review of these methods. It is the first survey to focus on Arabic handwriting recognition and the first Arabic character recognition survey to provide recognition rates and descriptions of test data for the approaches discussed. It includes background on the field, discussion of the methods, and future research directions.

  16. Development of First-Graders' Word Reading Skills: For Whom Can Dynamic Assessment Tell Us More?

    PubMed

    Cho, Eunsoo; Compton, Donald L; Gilbert, Jennifer K; Steacy, Laura M; Collins, Alyson A; Lindström, Esther R

    2017-01-01

    Dynamic assessment (DA) of word reading measures learning potential for early reading development by documenting the amount of assistance needed to learn how to read words with unfamiliar orthography. We examined the additive value of DA for predicting first-grade decoding and word recognition development while controlling for autoregressive effects. Additionally, we examined whether predictive validity of DA would be higher for students who have poor phonological awareness skills. First-grade students (n = 105) were assessed on measures of word reading, phonological awareness, rapid automatized naming, and DA in the fall and again assessed on word reading measures in the spring. A series of planned, moderated multiple regression analyses indicated that DA made a significant and unique contribution in predicting word recognition development above and beyond the autoregressor, particularly for students with poor phonological awareness skills. For these students, DA explained 3.5% of the unique variance in end-of-first-grade word recognition that was not attributable to autoregressive effect. Results suggest that DA provides an important source of individual differences in the development of word recognition skills that cannot be fully captured by merely assessing the present level of reading skills through traditional static assessment, particularly for students at risk for developing reading disabilities. © Hammill Institute on Disabilities 2015.

  17. Role of processing speed and depressed mood on encoding, storage, and retrieval memory functions in patients diagnosed with schizophrenia.

    PubMed

    Brébion, Gildas; David, Anthony S; Bressan, Rodrigo A; Pilowsky, Lyn S

    2007-01-01

    The role of various types of slowing of processing speed, as well as the role of depressed mood, on each stage of verbal memory functioning in patients diagnosed with schizophrenia was investigated. Mixed lists of high- and low-frequency words were presented, and immediate and delayed free recall and recognition were required. Two levels of encoding were studied by contrasting the relatively automatic encoding of the high-frequency words and the more effortful encoding of the low-frequency words. Storage was studied by contrasting immediate and delayed recall. Retrieval was studied by contrasting free recall and recognition. Three tests of motor and cognitive processing speed were administered as well. Regression analyses involving the three processing speed measures revealed that cognitive speed was the only predictor of the recall and recognition of the low-frequency words. Furthermore, slowing in cognitive speed accounted for the deficit in recall and recognition of the low-frequency words relative to a healthy control group. Depressed mood was significantly associated with recognition of the low-frequency words. Neither processing speed nor depressed mood was associated with storage efficiency. It is concluded that both cognitive speed slowing and depressed mood impact on effortful encoding processes.

  18. Cortical Reorganization in Dyslexic Children after Phonological Training: Evidence from Early Evoked Potentials

    ERIC Educational Resources Information Center

    Spironelli, Chiara; Penolazzi, Barbara; Vio, Claudio; Angrilli, Alessandro

    2010-01-01

    Brain plasticity was investigated in 14 Italian children affected by developmental dyslexia after 6 months of phonological training. The means used to measure language reorganization was the recognition potential, an early wave, also called N150, elicited by automatic word recognition. This component peaks over the left temporo-occipital cortex…

  19. N170 Visual Word Specialization on Implicit and Explicit Reading Tasks in Spanish Speaking Adult Neoliterates

    ERIC Educational Resources Information Center

    Sanchez, Laura V.

    2014-01-01

    Adult literacy training is known to be difficult in terms of teaching and maintenance (Abadzi, 2003), perhaps because adults who recently learned to read in their first language have not acquired reading automaticity. This study examines the fast word recognition process in neoliterate adults to evaluate whether they show evidence of perceptual…

  20. Arabic Language Modeling with Stem-Derived Morphemes for Automatic Speech Recognition

    ERIC Educational Resources Information Center

    Heintz, Ilana

    2010-01-01

    The goal of this dissertation is to introduce a method for deriving morphemes from Arabic words using stem patterns, a feature of Arabic morphology. The motivations are three-fold: modeling with morphemes rather than words should help address the out-of-vocabulary problem; working with stem patterns should prove to be a cross-dialectally valid…

  1. The Role of the Ventral and Dorsal Pathways in Reading Chinese Characters and English Words

    ERIC Educational Resources Information Center

    Sun, Yafeng; Yang, Yanhui; Desroches, Amy S.; Liu, Li; Peng, Danling

    2011-01-01

    Previous literature in alphabetic languages suggests that the occipital-temporal region (the ventral pathway) is specialized for automatic parallel word recognition, whereas the parietal region (the dorsal pathway) is specialized for serial letter-by-letter reading. However, few studies have directly examined the role of the ventral and…

  2. Automatic voice recognition using traditional and artificial neural network approaches

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1989-01-01

    The main objective of this research is to develop an algorithm for isolated-word recognition. This research is focused on digital signal analysis rather than linguistic analysis of speech. Feature extraction is carried out by applying a Linear Predictive Coding (LPC) algorithm of order 10. Continuous-word and speaker-independent recognition will be considered in future study after accomplishing this isolated-word research. To examine the similarity between the reference and the training sets, two approaches are explored. The first is implementing traditional pattern recognition techniques, where a dynamic time warping algorithm is applied to align the two sets and the probability of matching is calculated by measuring the Euclidean distance between the two sets. The second is implementing a backpropagation artificial neural net model with three layers as the pattern classifier. The adaptation rule implemented in this network is the generalized least mean square (LMS) rule. The first approach has been accomplished. A vocabulary of 50 words was selected and tested. The accuracy of the algorithm was found to be around 85 percent. The second approach is in progress at the present time.
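
    A minimal sketch of the matching step this abstract describes (dynamic time warping over per-frame LPC feature vectors with a Euclidean local distance) is given below. It is illustrative only; the frame arrays, template dictionary, and function names are assumptions, not the author's implementation.

    ```python
    import numpy as np

    def dtw_distance(ref, test):
        """Accumulated Euclidean distance along the best time alignment of two
        feature sequences (each of shape (n_frames, n_coeffs), e.g. order-10 LPC)."""
        n, m = len(ref), len(test)
        cost = np.full((n + 1, m + 1), np.inf)
        cost[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                d = np.linalg.norm(ref[i - 1] - test[j - 1])   # local Euclidean distance
                cost[i, j] = d + min(cost[i - 1, j],            # insertion
                                     cost[i, j - 1],            # deletion
                                     cost[i - 1, j - 1])        # match
        return cost[n, m]

    def recognize(test_frames, templates):
        """Pick the vocabulary word whose reference template aligns best with the input."""
        return min(templates, key=lambda w: dtw_distance(templates[w], test_frames))
    ```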

  3. Memory for pictures, words, and spatial location in older adults: evidence for pictorial superiority.

    PubMed

    Park, D C; Puglisi, J T; Sovacool, M

    1983-09-01

    In the present study the spatial location of picture and word stimuli was varied across four quadrants of photographic slides. Young and old people received either pictures or words to study and were told to remember either just the item or the item and its location. Recognition memory for items and memory for spatial location were tested. A pictorial superiority effect occurred for both old and young people's item recognition. Additionally, instructions to study position decreased item memory and facilitated position memory in both age groups. Spatial memory was markedly superior for pictures compared with matched words for old and young adults. The results are interpreted within the Hasher and Zacks framework of automatic processing. The implications of the data for designing mnemonic aids for elderly persons are considered.

  4. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  5. Unvoiced Speech Recognition Using Tissue-Conductive Acoustic Sensor

    NASA Astrophysics Data System (ADS)

    Heracleous, Panikos; Kaino, Tomomi; Saruwatari, Hiroshi; Shikano, Kiyohiro

    2006-12-01

    We present the use of stethoscope and silicon NAM (nonaudible murmur) microphones in automatic speech recognition. NAM microphones are special acoustic sensors, which are attached behind the talker's ear and can capture not only normal (audible) speech, but also very quietly uttered speech (nonaudible murmur). As a result, NAM microphones can be applied in automatic speech recognition systems when privacy is desired in human-machine communication. Moreover, NAM microphones show robustness against noise and they might be used in special systems (speech recognition, speech transform, etc.) for sound-impaired people. Using adaptation techniques and a small amount of training data, we achieved, for a 20 k dictation task, a word accuracy (figure given in the full text) for nonaudible murmur recognition in a clean environment. In this paper, we also investigate nonaudible murmur recognition in noisy environments and the effect of the Lombard reflex on nonaudible murmur recognition. We also propose three methods to integrate audible speech and nonaudible murmur recognition using a stethoscope NAM microphone with very promising results.

  6. Event-related potentials and recognition memory for pictures and words: the effects of intentional and incidental learning.

    PubMed

    Noldy, N E; Stelmack, R M; Campbell, K B

    1990-07-01

    Event-related potentials were recorded under conditions of intentional or incidental learning of pictures and words, and during the subsequent recognition memory test for these stimuli. Intentionally learned pictures were remembered better than incidentally learned pictures and intentionally learned words, which, in turn, were remembered better than incidentally learned words. In comparison to pictures that were ignored, the pictures that were attended were characterized by greater positive amplitude frontally at 250 ms and centro-parietally at 350 ms and by greater negativity at 450 ms at parietal and occipital sites. There were no effects of attention on the waveforms elicited by words. These results support the view that processing becomes automatic for words, whereas the processing of pictures involves additional effort or allocation of attentional resources. The N450 amplitude was greater for words than for pictures during both acquisition (intentional items) and recognition phases (hit and correct rejection categories for intentional items, hit category for incidental items). Because pictures are better remembered than words, the greater late positive wave (600 ms) elicited by the pictures than the words during the acquisition phase is also consistent with the association between P300 and better memory that has been reported.

  7. Retrieval, automaticity, vocabulary elaboration, orthography (RAVE-O): a comprehensive, fluency-based reading intervention program.

    PubMed

    Wolf, M; Miller, L; Donnelly, K

    2000-01-01

    The most important implication of the double-deficit hypothesis (Wolf & Bowers, in this issue) concerns a new emphasis on fluency and automaticity in intervention for children with developmental reading disabilities. The RAVE-O (Retrieval, Automaticity, Vocabulary Elaboration, Orthography) program is an experimental, fluency-based approach to reading intervention that is designed to accompany a phonological analysis program. In an effort to address multiple possible sources of dysfluency in readers with disabilities, the program involves comprehensive emphases both on fluency in word attack, word identification, and comprehension and on automaticity in underlying componential processes (e.g., phonological, orthographic, semantic, and lexical retrieval skills). The goals, theoretical principles, and applied activities of the RAVE-O curriculum are described with particular stress on facilitating the development of rapid orthographic pattern recognition and on changing children's attitudes toward language.

  8. A novel thermal face recognition approach using face pattern words

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng

    2010-04-01

    A reliable thermal face recognition system can enhance national security applications such as prevention against terrorism, surveillance, monitoring and tracking, especially at nighttime. The system can be applied at airports, customs or high-alert facilities (e.g., nuclear power plants) 24 hours a day. In this paper, we propose a novel face recognition approach utilizing thermal (long wave infrared) face images that can automatically identify a subject at both daytime and nighttime. With a properly acquired thermal image (as a query image) in the monitoring zone, the following processes are employed: normalization and denoising, face detection, face alignment, face masking, Gabor wavelet transform, face pattern words (FPWs) creation, and face identification by similarity measure (Hamming distance). If eyeglasses are present on a subject's face, an eyeglasses mask is automatically extracted from the query face image and then applied to all FPWs being compared (no further transforms are needed). A high identification rate (97.44% with a Top-1 match) has been achieved with the proposed approach on our preliminary face dataset (of 39 subjects), regardless of operating time and glasses-wearing condition.
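
    The identification step (binary face pattern words compared by Hamming distance, with an optional eyeglasses mask) could look roughly like the sketch below; the bit-vector representation and function names are assumptions for illustration.

    ```python
    import numpy as np

    def hamming_similarity(fpw_a, fpw_b, mask=None):
        """Fraction of agreeing bits between two binary face pattern words.
        mask: optional boolean array of bits to ignore (e.g. an eyeglasses region)."""
        valid = np.ones(fpw_a.shape, dtype=bool) if mask is None else ~mask
        disagree = np.logical_xor(fpw_a, fpw_b) & valid
        return 1.0 - disagree.sum() / valid.sum()

    def identify(query_fpw, gallery, mask=None):
        """Return the gallery subject whose FPW is most similar to the query FPW."""
        return max(gallery, key=lambda s: hamming_similarity(query_fpw, gallery[s], mask))
    ```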

  9. The emergence of automaticity in reading: Effects of orthographic depth and word decoding ability on an adjusted Stroop measure.

    PubMed

    Megherbi, Hakima; Elbro, Carsten; Oakhill, Jane; Segui, Juan; New, Boris

    2018-02-01

    How long does it take for word reading to become automatic? Does the appearance and development of automaticity differ as a function of orthographic depth (e.g., French vs. English)? These questions were addressed in a longitudinal study of English and French beginning readers. The study focused on automaticity as obligatory processing as measured in the Stroop test. Measures of decoding ability and the Stroop effect were taken at three time points during first grade (and during second grade in the United Kingdom) in 84 children. The study is the first to adjust the classic Stroop effect for inhibition (of distracting colors). The adjusted Stroop effect was zero in the absence of reading ability, and it was found to develop in tandem with decoding ability. After a further control for decoding, no effects of age or orthography were found on the adjusted Stroop measure. The results are in line with theories of the development of whole word recognition that emphasize the importance of the acquisition of the basic orthographic code. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping.

    PubMed

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-07-27

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work.
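
    The bag-of-words pipeline described here (cluster local descriptors into visual words, build a histogram per image, train a classifier) can be sketched as follows. scikit-learn is used purely for illustration, and the vocabulary size and classifier choice are assumptions.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    def build_vocabulary(descriptor_sets, k=200):
        """Cluster all local descriptors (e.g. SIFT or LBP vectors) into k visual words."""
        return KMeans(n_clusters=k, n_init=10).fit(np.vstack(descriptor_sets))

    def bow_histogram(descriptors, vocab):
        """Represent one food image as a normalized histogram of visual-word counts."""
        words = vocab.predict(descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
        return hist / hist.sum()

    def train_food_classifier(descriptor_sets, labels, k=200):
        """descriptor_sets: one array of local descriptors per training image."""
        vocab = build_vocabulary(descriptor_sets, k)
        X = np.array([bow_histogram(d, vocab) for d in descriptor_sets])
        return vocab, LinearSVC().fit(X, labels)
    ```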

  11. Automatic measurement and representation of prosodic features

    NASA Astrophysics Data System (ADS)

    Ying, Goangshiuan Shawn

    Effective measurement and representation of prosodic features of the acoustic signal for use in automatic speech recognition and understanding systems is the goal of this work. Prosodic features (stress, duration, and intonation) are variations of the acoustic signal whose domains extend beyond the boundaries of each individual phonetic segment. Listeners perceive prosodic features through a complex combination of acoustic correlates such as intensity, duration, and fundamental frequency (F0). We have developed new tools to measure F0 and intensity features. We apply a probabilistic global error correction routine to an Average Magnitude Difference Function (AMDF) pitch detector. A new short-term frequency-domain Teager energy algorithm is used to measure the energy of a speech signal. We have conducted a series of experiments performing lexical stress detection on words in continuous English speech from two speech corpora. We have experimented with two different approaches, a segment-based approach and a rhythm-unit-based approach, in lexical stress detection. The first approach uses pattern recognition with energy- and duration-based measurements as features to build Bayesian classifiers that detect the stress level of a vowel segment. In the second approach we define a rhythm unit and use only the F0-based measurement and a scoring system to determine the stressed segment in the rhythm unit. A duration-based segmentation routine was developed to break polysyllabic words into rhythm units. The long-term goal of this work is to develop a system that can effectively detect the stress pattern for each word in continuous speech utterances. Stress information will be integrated as a constraint for pruning the word hypotheses in a word recognition system based on hidden Markov models.
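
    The AMDF pitch measurement mentioned above can be sketched in a few lines; the sampling rate, frame handling, and search range below are illustrative assumptions rather than the author's tool.

    ```python
    import numpy as np

    def amdf_pitch(frame, fs=16000, f_min=60.0, f_max=400.0):
        """Estimate F0 of one voiced frame with the Average Magnitude Difference
        Function: the lag minimizing mean |x[n] - x[n+k]| is taken as the pitch period."""
        lag_min, lag_max = int(fs / f_max), int(fs / f_min)
        amdf = [np.mean(np.abs(frame[:-k] - frame[k:])) for k in range(lag_min, lag_max)]
        best_lag = lag_min + int(np.argmin(amdf))
        return fs / best_lag
    ```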

  12. Divided attention enhances the recognition of emotional stimuli: evidence from the attentional boost effect.

    PubMed

    Rossi-Arnaud, Clelia; Spataro, Pietro; Costanzi, Marco; Saraulli, Daniele; Cestari, Vincenzo

    2018-01-01

    The present study examined predictions of the early-phase-elevated-attention hypothesis of the attentional boost effect (ABE), which suggests that transient increases in attention at encoding, as instantiated in the ABE paradigm, should enhance the recognition of neutral and positive items (whose encoding is mostly based on controlled processes), while having small or null effects on the recognition of negative items (whose encoding is primarily based on automatic processes). Participants were presented with a sequence of negative, neutral and positive stimuli (pictures in Experiment 1, words in Experiment 2) associated with target (red) squares, distractor (green) squares or no squares (baseline condition). They were told to attend to the pictures/words and simultaneously press the spacebar of the computer when a red square appeared. In a later recognition task, stimuli associated with target squares were recognised better than stimuli associated with distractor squares, replicating the standard ABE. More importantly, we also found that: (a) the memory enhancement following target detection occurred with all types of stimuli (neutral, negative and positive) and (b) the advantage of negative stimuli over neutral stimuli was intact in the divided attention (DA) condition. These findings suggest that the encoding of negative stimuli depends on both controlled (attention-dependent) and automatic (attention-independent) processes.

  13. How should a speech recognizer work?

    PubMed

    Scharenborg, Odette; Norris, Dennis; Bosch, Louis; McQueen, James M

    2005-11-12

    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the computational level. In this article, we provide a computational-level analysis of the task of speech recognition, which reveals the close parallels between research concerned with HSR and ASR. We illustrate this relation by presenting a new computational model of human spoken-word recognition, built using techniques from the field of ASR that, in contrast to current existing models of HSR, recognizes words from real speech input. 2005 Lawrence Erlbaum Associates, Inc.

  14. Decision-related factors in pupil old/new effects: Attention, response execution, and false memory.

    PubMed

    Brocher, Andreas; Graf, Tim

    2017-07-28

    In this study, we investigate the effects of decision-related factors on recognition memory in pupil old/new paradigms. In Experiment 1, we used an old/new paradigm with words and pseudowords and participants made lexical decisions during recognition rather than old/new decisions. Importantly, participants were instructed to focus on the nonword-likeness of presented items, not their word-likeness. We obtained no old/new effects. In Experiment 2, participants discriminated old from new words and old from new pseudowords during recognition, and they did so as quickly as possible. We found old/new effects for both words and pseudowords. In Experiment 3, we used materials and an old/new design known to elicit a large number of incorrect responses. For false alarms ("old" response for new word), we found larger pupils than for correctly classified new items, starting at the point at which response execution was allowed (2750ms post stimulus onset). In contrast, pupil size for misses ("new" response for old word) was statistically indistinguishable from pupil size in correct rejections. Taken together, our data suggest that pupil old/new effects result more from the intentional use of memory than from its automatic use. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Automatic speech recognition research at NASA-Ames Research Center

    NASA Technical Reports Server (NTRS)

    Coler, Clayton R.; Plummer, Robert P.; Huff, Edward M.; Hitchcock, Myron H.

    1977-01-01

    A trainable acoustic pattern recognizer manufactured by Scope Electronics is presented. The voice command system (VCS) encodes speech by sampling 16 bandpass filters with center frequencies in the range from 200 to 5000 Hz. Variations in speaking rate are compensated for by a compression algorithm that subdivides each utterance into eight subintervals in such a way that the amount of spectral change within each subinterval is the same. The recorded filter values within each subinterval are then reduced to a 15-bit representation, giving a 120-bit encoding for each utterance. The VCS incorporates a simple recognition algorithm that utilizes five training samples of each word in a vocabulary of up to 24 words. The recognition rate of approximately 85 percent correct for untrained speakers and 94 percent correct for trained speakers was not considered adequate for flight systems use. Therefore, the built-in recognition algorithm was disabled, and the VCS was modified to transmit 120-bit encodings to an external computer for recognition.
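
    The 120-bit utterance encoding described here (16 band-pass filter values, eight subintervals of equal spectral change, 15 bits per subinterval) might be approximated as below. The abstract does not say how each subinterval is reduced to 15 bits, so the sign-of-adjacent-band-differences rule used here is only a guess, not the VCS's actual scheme.

    ```python
    import numpy as np

    def encode_utterance(filter_frames):
        """filter_frames: array of shape (n_frames, 16) of band-pass filter outputs.
        Returns an 8 x 15 = 120-bit code. Subinterval boundaries equalize cumulative
        spectral change; the 15-bit reduction is an assumed placeholder."""
        change = np.abs(np.diff(filter_frames, axis=0)).sum(axis=1)
        cum = np.concatenate([[0.0], np.cumsum(change)])
        bounds = np.searchsorted(cum, np.linspace(0.0, cum[-1], 9))
        bounds[0], bounds[-1] = 0, len(filter_frames)
        bits = []
        for a, b in zip(bounds[:-1], bounds[1:]):
            seg = filter_frames[a:max(b, a + 1)].mean(axis=0)    # 16 averaged band values
            bits.extend((seg[:-1] > seg[1:]).astype(int))        # 15 bits per subinterval
        return np.array(bits)
    ```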

  16. Memory bias in health anxiety is related to the emotional valence of health-related words.

    PubMed

    Ferguson, Eamonn; Moghaddam, Nima G; Bibby, Peter A

    2007-03-01

    A model based on the associative strength of object evaluations is tested to explain why those who score higher on health anxiety have a better memory for health-related words. Sixty participants observed health and nonhealth words. A recognition memory task followed a free recall task and finally subjects provided evaluations (emotionality, imageability, and frequency) for all the words. Hit rates for health words, d', c, and psychological response times (PRTs) for evaluations were examined using multi-level modelling (MLM) and regression. Health words had a higher hit rate, which was greater for those with higher levels of health anxiety. The higher hit rate for health words is partly mediated by the extent to which health words are evaluated as emotionally unpleasant, and this was stronger for (moderated by) those with higher levels of health anxiety. Consistent with the associative strength model, those with higher levels of health anxiety demonstrated faster PRTs when making emotional evaluations of health words compared to nonhealth words, while those lower in health anxiety were slower to evaluate health words. Emotional evaluations speed the recognition of health words for high health anxious individuals. These findings are discussed with respect to the wider literature on cognitive processes in health anxiety, automatic processing, implicit attitudes, and emotions in decision making.

  17. Dietary Assessment on a Mobile Phone Using Image Processing and Pattern Recognition Techniques: Algorithm Design and System Prototyping

    PubMed Central

    Probst, Yasmine; Nguyen, Duc Thanh; Tran, Minh Khoi; Li, Wanqing

    2015-01-01

    Dietary assessment, while traditionally based on pen-and-paper, is rapidly moving towards automatic approaches. This study describes an Australian automatic food record method and its prototype for dietary assessment via the use of a mobile phone and techniques of image processing and pattern recognition. Common visual features including scale invariant feature transformation (SIFT), local binary patterns (LBP), and colour are used for describing food images. The popular bag-of-words (BoW) model is employed for recognizing the images taken by a mobile phone for dietary assessment. Technical details are provided together with discussions on the issues and future work. PMID:26225994
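
    A minimal sketch of the bag-of-visual-words step, assuming local descriptors (e.g. SIFT or LBP vectors) have already been extracted for each image; the clustering-based vocabulary and histogram encoding follow the standard BoW recipe rather than the authors' exact pipeline:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def build_vocabulary(descriptor_sets, n_words=100):
        """Cluster local descriptors from many images into a visual vocabulary."""
        all_desc = np.vstack(descriptor_sets)
        return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

    def bow_histogram(descriptors, vocab):
        """Encode one image as a normalized histogram over the visual words."""
        words = vocab.predict(descriptors)
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
        return hist / max(hist.sum(), 1.0)

    # Toy example: 5 "food images", each with 200 random 128-dimensional descriptors.
    rng = np.random.default_rng(0)
    images = [rng.normal(size=(200, 128)) for _ in range(5)]
    vocab = build_vocabulary(images, n_words=50)
    features = np.array([bow_histogram(d, vocab) for d in images])
    print(features.shape)  # (5, 50) -- ready for a food classifier such as an SVM
    ```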

  18. Reading Fluency and College Readiness

    ERIC Educational Resources Information Center

    Rasinski, Timothy V.; Chang, Shu-Ching; Edmondson, Elizabeth; Nageldinger, James; Nigh, Jennifer; Remark, Linda; Kenney, Kristen Srsen; Walsh-Moorman, Elizabeth; Yildirim, Kasim; Nichols, William Dee; Paige, David D.; Rupley, William H.

    2017-01-01

    The Common Core State Standards suggest that an appropriate goal for secondary education is college and career readiness. Previous research has identified reading fluency as a critical component for proficient reading. One component of fluency is word recognition accuracy and automaticity. The present study attempted to determine the word…

  19. Automatic concept extraction from spoken medical reports.

    PubMed

    Happe, André; Pouliquen, Bruno; Burgun, Anita; Cuggia, Marc; Le Beux, Pierre

    2003-07-01

    The objective of this project is to investigate methods whereby a combination of speech recognition and automated indexing methods substitute for current transcription and indexing practices. We based our study on existing speech recognition software programs and on NOMINDEX, a tool that extracts MeSH concepts from medical text in natural language and that is mainly based on a French medical lexicon and on the UMLS. For each document, the process consists of three steps: (1) dictation and digital audio recording, (2) speech recognition, (3) automatic indexing. The evaluation consisted of a comparison between the set of concepts extracted by NOMINDEX after the speech recognition phase and the set of keywords manually extracted from the initial document. The method was evaluated on a set of 28 patient discharge summaries extracted from the MENELAS corpus in French, corresponding to in-patients admitted for coronarography. The overall precision was 73% and the overall recall was 90%. Indexing errors were mainly due to word sense ambiguity and abbreviations. A specific issue was the fact that the standard French translation of MeSH terms lacks diacritics. A preliminary evaluation of speech recognition tools showed that the rate of accurate recognition was higher than 98%. Only 3% of the indexing errors were generated by inadequate speech recognition. We discuss several areas to focus on to improve this prototype. However, the very low rate of indexing errors due to speech recognition errors highlights the potential benefits of combining speech recognition techniques and automatic indexing.

  20. Real-time speech gisting for ATC applications

    NASA Astrophysics Data System (ADS)

    Dunkelberger, Kirk A.

    1995-06-01

    Command and control within the ATC environment remains primarily voice-based. Hence, automatic real-time, speaker-independent, continuous speech recognition (CSR) has many obvious applications and implied benefits for the ATC community: automated target tagging, aircraft compliance monitoring, controller training, automatic alarm disabling, display management, and many others. However, while current state-of-the-art CSR systems provide upwards of 98% word accuracy in laboratory environments, recent low-intrusion experiments in ATCT environments demonstrated less than 70% word accuracy in spite of significant investments in recognizer tuning. Acoustic channel irregularities and variations in controller/pilot grammar impact current CSR algorithms at their weakest points. It will be shown herein, however, that real-time context- and environment-sensitive gisting can provide key command phrase recognition rates of greater than 95% using the same low-intrusion approach. The combination of real-time inexact syntactic pattern recognition techniques and a tight integration of CSR, gisting, and ATC database accessor system components is the key to these high phrase recognition rates. A system concept for real-time gisting in the ATC context is presented herein. After establishing an application context, the discussion presents a minimal CSR technology context, then focuses on the gisting mechanism, desirable interfaces into the ATCT database environment, and data and control flow within the prototype system. Results of recent tests for a subset of the functionality are presented together with suggestions for further research.

  1. Automatically Detecting Likely Edits in Clinical Notes Created Using Automatic Speech Recognition

    PubMed Central

    Lybarger, Kevin; Ostendorf, Mari; Yetisgen, Meliha

    2017-01-01

    The use of automatic speech recognition (ASR) to create clinical notes has the potential to reduce costs associated with note creation for electronic medical records, but at current system accuracy levels, post-editing by practitioners is needed to ensure note quality. Aiming to reduce the time required to edit ASR transcripts, this paper investigates novel methods for automatic detection of edit regions within the transcripts, including both putative ASR errors and regions that are targets for cleanup or rephrasing. We create detection models using logistic regression and conditional random field models, exploring a variety of text-based features that consider the structure of clinical notes and exploit the medical context. Different medical text resources are used to improve feature extraction. Experimental results on a large corpus of practitioner-edited clinical notes show that 67% of sentence-level edits and 45% of word-level edits can be detected with a false detection rate of 15%. PMID:29854187
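
    For illustration, a toy logistic-regression detector over word n-gram features, assuming sentence-level edit labels; the example sentences and labels are invented and the feature set is far simpler than the structural and medical-context features described above:

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical ASR sentences labelled 1 if a practitioner later edited them.
    sentences = ["patient denies chest pain", "pt c o sob time two days",
                 "no acute distress noted", "meds reviewed and and continued"]
    edited = [0, 1, 0, 1]

    model = make_pipeline(CountVectorizer(ngram_range=(1, 2)),
                          LogisticRegression(max_iter=1000))
    model.fit(sentences, edited)
    print(model.predict_proba(["pt c o chest pain times two days"])[:, 1])
    ```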

  2. Adapting Word Embeddings from Multiple Domains to Symptom Recognition from Psychiatric Notes

    PubMed Central

    Zhang, Yaoyun; Li, Hee-Jin; Wang, Jingqi; Cohen, Trevor; Roberts, Kirk; Xu, Hua

    2018-01-01

    Mental health is increasingly recognized as an important topic in healthcare. Information concerning psychiatric symptoms is critical for the timely diagnosis of mental disorders, as well as for the personalization of interventions. However, the diversity and sparsity of psychiatric symptoms make it challenging for conventional natural language processing techniques to automatically extract such information from clinical text. To address this problem, this study takes the initiative to use and adapt word embeddings from four source domains – intensive care, biomedical literature, Wikipedia and Psychiatric Forum – to recognize symptoms in the target domain of psychiatry. We investigated four different approaches: 1) using only word embeddings of the source domain, 2) directly combining data of the source and target to generate word embeddings, 3) assigning different weights to word embeddings, and 4) retraining the word embedding model of the source domain using a corpus of the target domain. To the best of our knowledge, this is the first work to adapt multiple word embeddings of external domains to improve psychiatric symptom recognition in clinical text. Experimental results showed that the last two approaches outperformed the baseline methods, indicating the effectiveness of our new strategies to leverage embeddings from other domains. PMID:29888086
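
    A sketch of the third approach (weighted combination of source-domain embeddings), assuming each source embedding is available as a word-to-vector dictionary of a common dimensionality; the weight values and vocabulary are illustrative:

    ```python
    import numpy as np

    def combine_embeddings(sources, weights):
        """Weighted average of word vectors from several source-domain embeddings.

        sources: list of dicts mapping word -> vector (same dimensionality).
        weights: one weight per source, e.g. tuned on target-domain development data.
        """
        vocab = set().union(*sources)
        combined = {}
        for word in vocab:
            vecs = [emb[word] for emb in sources if word in emb]
            ws = [w for emb, w in zip(sources, weights) if word in emb]
            combined[word] = np.average(vecs, axis=0, weights=ws)
        return combined

    # Toy example with two 4-dimensional source embeddings.
    clinical = {"anxiety": np.array([0.1, 0.2, 0.3, 0.4])}
    wiki = {"anxiety": np.array([0.4, 0.3, 0.2, 0.1]), "sadness": np.ones(4)}
    adapted = combine_embeddings([clinical, wiki], weights=[0.7, 0.3])
    print(adapted["anxiety"])
    ```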

  3. Automatic Speech Recognition from Neural Signals: A Focused Review.

    PubMed

    Herff, Christian; Schultz, Tanja

    2016-01-01

    Speech interfaces have become widely accepted and are nowadays integrated in various real-life applications and devices. They have become a part of our daily life. However, speech interfaces presume the ability to produce intelligible speech, which might be impossible in loud environments, disturbing to bystanders, or unattainable for people who cannot produce speech (e.g., patients suffering from locked-in syndrome). For these reasons it would be highly desirable not to speak but simply to imagine saying words or sentences. Interfaces based on imagined speech would enable fast and natural communication without the need for audible speech and would give a voice to otherwise mute people. This focused review analyzes the potential of different brain imaging techniques to recognize speech from neural signals by applying Automatic Speech Recognition technology. We argue that modalities based on metabolic processes, such as functional Near Infrared Spectroscopy and functional Magnetic Resonance Imaging, are less suited for Automatic Speech Recognition from neural signals due to low temporal resolution, but are very useful for the investigation of the underlying neural mechanisms involved in speech processes. In contrast, electrophysiologic activity is fast enough to capture speech processes and is therefore better suited for ASR. Our experimental results indicate the potential of these signals for speech recognition from neural data, with a focus on invasively measured brain activity (electrocorticography). As a first example of Automatic Speech Recognition techniques used on neural signals, we discuss the Brain-to-text system.

  4. Word add-in for ontology recognition: semantic enrichment of scientific literature.

    PubMed

    Fink, J Lynn; Fernicola, Pablo; Chandran, Rahul; Parastatidis, Savas; Wade, Alex; Naim, Oscar; Quinn, Gregory B; Bourne, Philip E

    2010-02-24

    In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata.

  5. Speech recognition-based and automaticity programs to help students with severe reading and spelling problems.

    PubMed

    Higgins, Eleanor L; Raskind, Marshall H

    2004-12-01

    This study was conducted to assess the effectiveness of two programs developed by the Frostig Center Research Department to improve the reading and spelling of students with learning disabilities (LD): a computer Speech Recognition-based Program (SRBP) and a computer and text-based Automaticity Program (AP). Twenty-eight LD students with reading and spelling difficulties (aged 8 to 18) received each program for 17 weeks and were compared with 16 students in a contrast group who did not receive either program. After adjusting for age and IQ, both the SRBP and AP groups showed significant differences over the contrast group in improving word recognition and reading comprehension. Neither program showed significant differences over contrasts in spelling. The SRBP also improved the performance of the target group when compared with the contrast group on phonological elision and nonword reading efficiency tasks. The AP showed significant differences in all process and reading efficiency measures.

  6. A Corpus-Based Approach for Automatic Thai Unknown Word Recognition Using Boosting Techniques

    NASA Astrophysics Data System (ADS)

    Techo, Jakkrit; Nattee, Cholwich; Theeramunkong, Thanaruk

    While classification techniques can be applied to automatic unknown word recognition in a language without word boundaries, they face the problem of unbalanced datasets, where the number of positive unknown word candidates is much smaller than that of negative candidates. To solve this problem, this paper presents a corpus-based approach that introduces a so-called group-based ranking evaluation technique into ensemble learning in order to generate a sequence of classification models that later collaborate to select the most probable unknown word from multiple candidates. Given a classification model, the group-based ranking evaluation (GRE) is applied to construct a training dataset for learning the succeeding model, by weighting each of its candidates according to their ranks and correctness when the candidates of an unknown word are considered as one group. A number of experiments have been conducted on a large Thai medical text to evaluate performance of the proposed group-based ranking evaluation approach, namely V-GRE, compared to the conventional naïve Bayes classifier and our vanilla version without ensemble learning. As a result, the proposed method achieves an accuracy of 90.93±0.50% when the first rank is selected, and 97.26±0.26% when the top-ten candidates are considered; that is, an 8.45% and 6.79% improvement over the conventional record-based naïve Bayes classifier and the vanilla version, respectively. A further experiment using only the best features shows 93.93±0.22% and up to 98.85±0.15% accuracy for top-1 and top-10, respectively, a 3.97% and 9.78% improvement over naïve Bayes and the vanilla version. Finally, an error analysis is given.
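
    A much-simplified sketch of the group-based re-weighting idea: candidates for each unknown word are ranked by the current model's score, and the weights used to train the next model grow when a correct candidate is ranked low (all names, scores and the weighting rule are illustrative, not the paper's exact GRE formulation):

    ```python
    def group_rank_weights(groups):
        """Weight unknown-word candidates by rank within their group.

        Each group lists (candidate, classifier_score, is_correct) tuples for one
        unknown word. Correct candidates ranked low by the current model receive
        larger training weights for the next model; high-ranked wrong candidates
        are also emphasized.
        """
        weighted = []
        for group in groups:
            ranked = sorted(group, key=lambda c: c[1], reverse=True)
            for rank, (cand, _score, correct) in enumerate(ranked, start=1):
                weight = float(rank) if correct else 1.0 / rank
                weighted.append((cand, weight))
        return weighted

    groups = [[("kham-mai", 0.4, True), ("kan", 0.9, False)],
              [("rok", 0.8, True), ("r-k", 0.3, False)]]
    print(group_rank_weights(groups))
    ```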

  7. Spoken Grammar Practice and Feedback in an ASR-Based CALL System

    ERIC Educational Resources Information Center

    de Vries, Bart Penning; Cucchiarini, Catia; Bodnar, Stephen; Strik, Helmer; van Hout, Roeland

    2015-01-01

    Speaking practice is important for learners of a second language. Computer assisted language learning (CALL) systems can provide attractive opportunities for speaking practice when combined with automatic speech recognition (ASR) technology. In this paper, we present a CALL system that offers spoken practice of word order, an important aspect of…

  8. Reading in EFL: Facts and Fictions.

    ERIC Educational Resources Information Center

    Paran, Amos

    1996-01-01

    Examines the representation of the reading process in English as a Foreign Language (EFL) texts. The article argues that many of these representations are dated and based on a theory that was never a mainstream theory of first-language reading. Suggestions for exercises to strengthen automatic word recognition in EFL readers are provided. (33…

  9. The Relationship between Reading Fluency and Reading Comprehension in Fifth-Grade Turkish Students

    ERIC Educational Resources Information Center

    Yildiz, Mustafa; Yildirim, Kasim; Ates, Seyit; Rasinski, Timothy; Fitzgerald, Shawn; Zimmerman, Belinda

    2014-01-01

    This research study focused on the relationships among the components of reading fluency (word recognition accuracy, automaticity, and prosody), as well as their relationships with reading comprehension, among fifth-grade students in Turkey. A total of 119 fifth-grade elementary school students participated in the study. The…

  10. Spreading Activation in an Attractor Network with Latching Dynamics: Automatic Semantic Priming Revisited

    ERIC Educational Resources Information Center

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2012-01-01

    Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In this study, we implemented SA in an attractor neural network model with distributed representations and created a unified…

  11. The Development of Reading for Comprehension: An Information Processing Analysis. Final Report.

    ERIC Educational Resources Information Center

    Schadler, Margaret; Juola, James F.

    This report summarizes research performed at the University of Kansas that involved several topics related to reading and learning to read, including the development of automatic word recognition processes, reading for comprehension, and the development of new computer technologies designed to facilitate the reading process. The first section…

  12. Learning to Be a Good Orthographic Reader

    ERIC Educational Resources Information Center

    Castles, Anne; Nation, Kate

    2008-01-01

    Recent years have brought about rapid advances in our understanding of reading and how it develops, particularly in relation to the importance of alphabetic coding skills. However, much less has been known about the transition from alphabetic decoding to the rapid and automatic orthographic recognition of words, which is the hallmark of skilled…

  13. Automaticity of phonological and semantic processing during visual word recognition.

    PubMed

    Pattamadilok, Chotiga; Chanoine, Valérie; Pallier, Christophe; Anton, Jean-Luc; Nazarian, Bruno; Belin, Pascal; Ziegler, Johannes C

    2017-04-01

    Reading involves activation of phonological and semantic knowledge. Yet, the automaticity of the activation of these representations remains subject to debate. The present study addressed this issue by examining how different brain areas involved in language processing responded to a manipulation of bottom-up (level of visibility) and top-down information (task demands) applied to written words. The analyses showed that the same brain areas were activated in response to written words whether the task was symbol detection, rime detection, or semantic judgment. This network included posterior, temporal and prefrontal regions, which clearly suggests the involvement of orthographic, semantic and phonological/articulatory processing in all tasks. However, we also found interactions between task and stimulus visibility, which reflected the fact that the strength of the neural responses to written words in several high-level language areas varied across tasks. Together, our findings suggest that the involvement of phonological and semantic processing in reading is supported by two complementary mechanisms. First, an automatic mechanism that results from a task-independent spread of activation throughout a network in which orthography is linked to phonology and semantics. Second, a mechanism that further fine-tunes the sensitivity of high-level language areas to the sensory input in a task-dependent manner. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Hemispheric asymmetry in holistic processing of words.

    PubMed

    Ventura, Paulo; Delgado, João; Ferreira, Miguel; Farinha-Fernandes, António; Guerreiro, José C; Faustino, Bruno; Leite, Isabel; Wong, Alan C-N

    2018-05-13

    Holistic processing has been regarded as a hallmark of face perception, indicating the automatic and obligatory tendency of the visual system to process all face parts as a perceptual unit rather than in isolation. Studies involving lateralized stimulus presentation suggest that the right hemisphere dominates holistic face processing. Holistic processing can also be shown with other categories such as words, and thus it is not specific to faces or face-like expertise. Here, we used divided visual field presentation to investigate the possibly different contributions of the two hemispheres to holistic word processing. Observers performed same/different judgments on the cued parts of two sequentially presented words in the complete composite paradigm. Our data indicate a right hemisphere specialization for holistic word processing. Thus, these markers of expert object recognition are domain general.

  15. The Use of Error Data to Study the Development of Verbal Encoding of Pictorial Stimuli.

    ERIC Educational Resources Information Center

    Cramer, Phebe

    If older children automatically label pictorial stimuli, then their performance should be impaired on tasks in which such labeling would increase the error rate. Children were asked to learn pairs of verbal or pictorial stimuli which, when combined, formed a different compound word (BUTTER-FLY). Subsequently, a false recognition test that included…

  16. Tucker Signing as a Phonics Instruction Tool to Develop Phonemic Awareness in Children

    ERIC Educational Resources Information Center

    Valbuena, Amanda Carolina

    2014-01-01

    To develop reading acquisition in an effective way, it is necessary to take into account three goals during the process: automatic word recognition, or development of phonemic awareness, reading comprehension, and a desire for reading. This article focuses on promoting phonemic awareness in English as a second language through a program called…

  17. Can the Relationship Between Rapid Automatized Naming and Word Reading Be Explained by a Catastrophe? Empirical Evidence From Students With and Without Reading Difficulties.

    PubMed

    Sideridis, Georgios D; Simos, Panagiotis; Mouzaki, Angeliki; Stamovlasis, Dimitrios; Georgiou, George K

    2018-05-01

    The purpose of the present study was to explain the moderating role of rapid automatized naming (RAN) in word reading with a cusp catastrophe model. We hypothesized that increases in RAN performance speed beyond a critical point would be associated with the disruption in word reading, consistent with a "generic shutdown" hypothesis. Participants were 587 elementary schoolchildren (Grades 2-4), among whom 87 had reading comprehension difficulties per the IQ-achievement discrepancy criterion. Data were analyzed via a cusp catastrophe model derived from the nonlinear dynamics systems theory. Results indicated that for children with reading comprehension difficulties, as naming speed falls below a critical level, the association between core reading processes (word recognition and decoding) becomes chaotic and unpredictable. However, after the significant common variance attributed to motivation, emotional, and internalizing symptoms measures from RAN scores was partialed out, its role as a bifurcation variable was no longer evident. Taken together, these findings suggest that RAN represents a salient cognitive measure that may be associated with psychoemotional processes that are, at least in part, responsible for unpredictable and chaotic word reading behavior among children with reading comprehension deficits.

  18. Selected Topics from LVCSR Research for Asian Languages at Tokyo Tech

    NASA Astrophysics Data System (ADS)

    Furui, Sadaoki

    This paper presents our recent work in regard to building Large Vocabulary Continuous Speech Recognition (LVCSR) systems for the Thai, Indonesian, and Chinese languages. For Thai, since there is no word boundary in the written form, we have proposed a new method for automatically creating word-like units from a text corpus, and applied topic and speaking style adaptation to the language model to recognize spoken-style utterances. For Indonesian, we have applied proper noun-specific adaptation to acoustic modeling, and rule-based English-to-Indonesian phoneme mapping to solve the problem of large variation in proper noun and English word pronunciation in a spoken-query information retrieval system. In spoken Chinese, long organization names are frequently abbreviated, and abbreviated utterances cannot be recognized if the abbreviations are not included in the dictionary. We have proposed a new method for automatically generating Chinese abbreviations, and by expanding the vocabulary using the generated abbreviations, we have significantly improved the performance of spoken query-based search.

  19. Word add-in for ontology recognition: semantic enrichment of scientific literature

    PubMed Central

    2010-01-01

    Background In the current era of scientific research, efficient communication of information is paramount. As such, the nature of scholarly and scientific communication is changing; cyberinfrastructure is now absolutely necessary and new media are allowing information and knowledge to be more interactive and immediate. One approach to making knowledge more accessible is the addition of machine-readable semantic data to scholarly articles. Results The Word add-in presented here will assist authors in this effort by automatically recognizing and highlighting words or phrases that are likely information-rich, allowing authors to associate semantic data with those words or phrases, and to embed that data in the document as XML. The add-in and source code are publicly available at http://www.codeplex.com/UCSDBioLit. Conclusions The Word add-in for ontology term recognition makes it possible for an author to add semantic data to a document as it is being written and it encodes these data using XML tags that are effectively a standard in life sciences literature. Allowing authors to mark-up their own work will help increase the amount and quality of machine-readable literature metadata. PMID:20181245

  20. ASM Based Synthesis of Handwritten Arabic Text Pages

    PubMed Central

    Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient natural ground-truthed data is unavailable. PMID:26295059

  1. ASM Based Synthesis of Handwritten Arabic Text Pages.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif; Ghoneim, Ahmed

    2015-01-01

    Document analysis tasks, such as text recognition, word spotting, or segmentation, are highly dependent on comprehensive and suitable databases for training and validation. However, their generation is expensive in terms of labor and time. As a matter of fact, there is a lack of such databases, which complicates research and development. This is especially true for Arabic handwriting recognition, which involves different preprocessing, segmentation, and recognition methods, each with individual demands on samples and ground truth. To bypass this problem, we present an efficient system that automatically turns Arabic Unicode text into synthetic images of handwritten documents with detailed ground truth. Active Shape Models (ASMs) based on 28046 online samples were used for character synthesis, and statistical properties were extracted from the IESK-arDB database to simulate baselines and word slant or skew. In the synthesis step, ASM-based representations are composed into words and text pages, smoothed by B-Spline interpolation, and rendered considering writing speed and pen characteristics. Finally, we use the synthetic data to validate a segmentation method. An experimental comparison with the IESK-arDB database encourages training and testing document analysis methods on synthetic samples whenever sufficient natural ground-truthed data is unavailable.

  2. EMG-based speech recognition using hidden markov models with global control variables.

    PubMed

    Lee, Ki-Seung

    2008-03-01

    It is well known that a strong relationship exists between human voices and the movement of articulatory facial muscles. In this paper, we utilize this knowledge to implement an automatic speech recognition scheme which uses solely surface electromyogram (EMG) signals. The sequence of EMG signals for each word is modelled by a hidden Markov model (HMM) framework. The main objective of the work involves building a model for state observation density when multichannel observation sequences are given. The proposed model reflects the dependencies between each of the EMG signals, which are described by introducing a global control variable. We also develop an efficient model training method, based on a maximum likelihood criterion. In a preliminary study, 60 isolated words were used as recognition variables. EMG signals were acquired from three articulatory facial muscles. The findings indicate that such a system may have the capacity to recognize speech signals with an accuracy of up to 87.07%, which is superior to the independent probabilistic model.
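
    A minimal per-word HMM recognizer in the spirit of the setup above, assuming EMG feature sequences per word and using the hmmlearn package; the global control variable coupling the channels, which is the paper's contribution, is omitted here:

    ```python
    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # assumed dependency: hmmlearn

    def train_word_models(training_data, n_states=4):
        """Fit one HMM per word; training_data maps word -> list of (T, D) feature arrays."""
        models = {}
        for word, sequences in training_data.items():
            X = np.vstack(sequences)
            lengths = [len(s) for s in sequences]
            m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
            m.fit(X, lengths)
            models[word] = m
        return models

    def recognize(models, sequence):
        """Return the word whose HMM assigns the highest log-likelihood."""
        return max(models, key=lambda w: models[w].score(sequence))

    # Toy example with simulated 3-channel EMG features for two words.
    rng = np.random.default_rng(1)
    data = {"yes": [rng.normal(0, 1, (30, 3)) for _ in range(5)],
            "no": [rng.normal(2, 1, (30, 3)) for _ in range(5)]}
    models = train_word_models(data)
    print(recognize(models, rng.normal(2, 1, (25, 3))))  # likely "no"
    ```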

  3. Implicit phonological priming during visual word recognition.

    PubMed

    Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C

    2011-03-15

    Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs including word-word homophone and pseudoword-word pseudohomophone pairs with a prime presentation below perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming including bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG) with additional region of interest (ROI) analyses revealing response suppression in the left lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN) suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Transcript mapping for handwritten English documents

    NASA Astrophysics Data System (ADS)

    Jose, Damien; Bharadwaj, Anurag; Govindaraju, Venu

    2008-01-01

    Transcript mapping, or text alignment with handwritten documents, is the automatic alignment of words in a text file with word images in a handwritten document. Such a mapping has several applications in fields ranging from machine learning, where large quantities of truth data are required for evaluating handwriting recognition algorithms, to data mining, where word image indexes are used in ranked retrieval of scanned documents in a digital library. The alignment also aids "writer identity" verification algorithms. Interfaces which display scanned handwritten documents may use this alignment to highlight manuscript tokens when a person examines the corresponding transcript word. We propose an adaptation of the True DTW dynamic programming algorithm for English handwritten documents. Our primary contribution is the integration, as a cost metric in the DTW algorithm, of the dissimilarity scores from a word-model word recognizer with the Levenshtein distance between the recognized word and the lexicon word, leading to a fast and accurate alignment. The results provided confirm the effectiveness of our approach.
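
    A sketch of the combined cost idea under simplifying assumptions: each word image contributes its recognizer dissimilarity score plus the edit distance between its recognized string and the transcript word, and a DTW-style recursion accumulates the alignment cost (the function names and exact cost combination are illustrative):

    ```python
    import numpy as np

    def levenshtein(a, b):
        """Edit distance between a recognized word string and a transcript word."""
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0] = np.arange(len(a) + 1)
        d[0, :] = np.arange(len(b) + 1)
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                              d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
        return d[-1, -1]

    def align_cost(recognized, transcript, recog_scores):
        """DTW-style cumulative cost of aligning word images to transcript words."""
        n, m = len(recognized), len(transcript)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                c = recog_scores[i - 1] + levenshtein(recognized[i - 1], transcript[j - 1])
                D[i, j] = c + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[n, m]

    # Recognizer output for three word images, with dissimilarity scores, vs. the transcript.
    print(align_cost(["thc", "cat", "sat"], ["the", "cat", "sat"], [0.2, 0.1, 0.1]))
    ```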

  5. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    NASA Astrophysics Data System (ADS)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech with available standard assessment methods based on human perception. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. Considering two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce consistent speech signals for a given word and distinct speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before the exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a predictor of speech recognition rate is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were conducted on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
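
    An illustrative distance-based score in the same spirit, assuming each repetition of a word is represented by a feature vector: within-word distances capture consistency and between-word distances capture distinction, and their ratio rises with clearer speech (this is not the paper's exact Ψ formulation):

    ```python
    import numpy as np

    def clarity_score(samples):
        """Ratio of mean between-word distance to mean within-word distance.

        samples: dict mapping word -> list of feature vectors (one per repetition).
        Higher values indicate consistent repetitions and well-separated words.
        """
        within, between = [], []
        words = list(samples)
        for w in words:
            reps = samples[w]
            within += [np.linalg.norm(reps[i] - reps[j])
                       for i in range(len(reps)) for j in range(i + 1, len(reps))]
        for a in range(len(words)):
            for b in range(a + 1, len(words)):
                between += [np.linalg.norm(x - y)
                            for x in samples[words[a]] for y in samples[words[b]]]
        return np.mean(between) / max(np.mean(within), 1e-9)

    rng = np.random.default_rng(0)
    clear = {w: [rng.normal(k, 0.1, 5) for _ in range(3)] for k, w in enumerate(["pa", "ta", "ka"])}
    slurred = {w: [rng.normal(0, 1.0, 5) for _ in range(3)] for w in ["pa", "ta", "ka"]}
    print(clarity_score(clear), ">", clarity_score(slurred))
    ```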

  6. On national flags and language tags: Effects of flag-language congruency in bilingual word recognition.

    PubMed

    Grainger, Jonathan; Declerck, Mathieu; Marzouki, Yousri

    2017-07-01

    French-English bilinguals performed a generalized lexical decision experiment with mixed lists of French and English words and pseudo-words. In Experiment 1, each word/pseudo-word was superimposed on the picture of the French or UK flag, and flag-word congruency was manipulated. The flag was not informative with respect to either the lexical decision response or the language of the word. Nevertheless, lexical decisions to word stimuli were faster following the congruent flag compared with the incongruent flag, but only for French (L1) words. Experiment 2 replicated this flag-language congruency effect in a priming paradigm, where the word and pseudo-word targets followed the brief presentation of the flag prime, and this time effects were seen in both languages. We take these findings as evidence for a mechanism that automatically processes linguistic and non-linguistic information concerning the presence or not of a given language. Language membership information can then modulate lexical processing, in line with the architecture of the BIA model, but not the BIA+ model. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Free recall test experience potentiates strategy-driven effects of value on memory.

    PubMed

    Cohen, Michael S; Rissman, Jesse; Hovhannisyan, Mariam; Castel, Alan D; Knowlton, Barbara J

    2017-10-01

    People tend to show better memory for information that is deemed valuable or important. By one mechanism, individuals selectively engage deeper, semantic encoding strategies for high value items (Cohen, Rissman, Suthana, Castel, & Knowlton, 2014). By another mechanism, information paired with value or reward is automatically strengthened in memory via dopaminergic projections from midbrain to hippocampus (Shohamy & Adcock, 2010). We hypothesized that the latter mechanism would primarily enhance recollection-based memory, while the former mechanism would strengthen both recollection and familiarity. We also hypothesized that providing interspersed tests during study is a key to encouraging selective engagement of strategies. To test these hypotheses, we presented participants with sets of words, and each word was associated with a high or low point value. In some experiments, free recall tests were given after each list. In all experiments, a recognition test was administered 5 minutes after the final word list. Process dissociation was accomplished via remember/know judgments at recognition, a recall test probing both item memory and memory for a contextual detail (word plurality), and a task dissociation combining a recognition test for plurality (intended to probe recollection) with a speeded item recognition test (to probe familiarity). When recall tests were administered after study lists, high value strengthened both recollection and familiarity. When memory was not tested after each study list, but rather only at the end, value increased recollection but not familiarity. These dual process dissociations suggest that interspersed recall tests guide learners' use of metacognitive control to selectively apply effective encoding strategies. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Improving Automated Lexical and Discourse Analysis of Online Chat Dialog

    DTIC Science & Technology

    2007-09-01

    include spelling- and grammar-checking on our word processing software; voice recognition in our automobiles; and telephone-based conversational agents … conversational agents can help customers make purchases on-line [3]. In addition, discourse analyzers can automatically separate multiple, interleaved … telephone-based conversational agent needs to know if it was asked a question or tasked to do something. Indeed, Stolcke et al. demonstrated that …

  9. A paper form processing system with an error correcting function for reading handwritten Kanji strings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Katsumi Marukawa; Kazuki Nakashima; Masashi Koga

    1994-12-31

    This paper presents a paper form processing system with an error correcting function for reading handwritten kanji strings. In paper form processing, names and addresses are important key data, and this paper takes up an error correcting method for name and address recognition. The method automatically corrects errors of the kanji OCR (Optical Character Reader) with the help of word dictionaries and other knowledge. Moreover, it allows names and addresses to be written in any style. The method consists of word matching and "furigana" verification for name strings, and address approval for address strings. For word matching, kanji name candidates are extracted by automaton-type word matching. In "furigana" verification, kana candidate characters recognized by the kana OCR are compared with kana readings retrieved from the name dictionary based on the kanji name candidates given by the word matching. The correct name is selected from the results of word matching and furigana verification. The address approval efficiently searches for the right address with a bottom-up procedure that follows hierarchical relations from a lower place name to an upper one, using the positional conditions among the place names. We ascertained that the error correcting method substantially improves the recognition rate and processing speed in experiments on 5,032 forms.

  10. [Creating a language model of the forensic medicine domain for developing an autopsy recording system by automatic speech recognition].

    PubMed

    Niijima, H; Ito, N; Ogino, S; Takatori, T; Iwase, H; Kobayashi, M

    2000-11-01

    To put speech recognition technology to practical use for recording forensic autopsies, a language model specialized for forensic autopsy was developed. A trigram (3-gram) language model for forensic autopsy was created and, together with a Hidden-Markov-Model-based acoustic model for Japanese speech recognition, was used to customize the speech recognition engine for forensic autopsy. A forensic vocabulary of over 10,000 words was compiled and some 300,000 sentence patterns were generated to create the forensic language model, which was then mixed in appropriate proportions with a general language model to attain high accuracy. When tested by dictating autopsy findings, this speech recognition system achieved a recognition rate of about 95%, which appears to reach practical usability as far as the speech recognition software is concerned, though room remains for improving its hardware and application-layer software.

  11. Analysis of Factors Affecting System Performance in the ASpIRE Challenge

    DTIC Science & Technology

    2015-12-13

    performance in the ASpIRE (Automatic Speech recognition In Reverberant Environments) challenge. In particular, the overall word error rate (WER) of the solver systems is analyzed as a function of room, distance between talker and microphone, and microphone type. We also analyze speech activity detection … analysis will inform the design of future challenges and provide insight into the efficacy of current solutions addressing noisy reverberant speech

  12. LANDMARK-BASED SPEECH RECOGNITION: REPORT OF THE 2004 JOHNS HOPKINS SUMMER WORKSHOP.

    PubMed

    Hasegawa-Johnson, Mark; Baker, James; Borys, Sarah; Chen, Ken; Coogan, Emily; Greenberg, Steven; Juneja, Amit; Kirchhoff, Katrin; Livescu, Karen; Mohan, Srividya; Muller, Jennifer; Sonmez, Kemal; Wang, Tianyu

    2005-01-01

    Three research prototype speech recognition systems are described, all of which use recently developed methods from artificial intelligence (specifically support vector machines, dynamic Bayesian networks, and maximum entropy classification) in order to implement, in the form of an automatic speech recognizer, current theories of human speech perception and phonology (specifically landmark-based speech perception, nonlinear phonology, and articulatory phonology). All three systems begin with a high-dimensional multiframe acoustic-to-distinctive feature transformation, implemented using support vector machines trained to detect and classify acoustic phonetic landmarks. Distinctive feature probabilities estimated by the support vector machines are then integrated using one of three pronunciation models: a dynamic programming algorithm that assumes canonical pronunciation of each word, a dynamic Bayesian network implementation of articulatory phonology, or a discriminative pronunciation model trained using the methods of maximum entropy classification. Log probability scores computed by these models are then combined, using log-linear combination, with other word scores available in the lattice output of a first-pass recognizer, and the resulting combination score is used to compute a second-pass speech recognition output.

  13. Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

    PubMed

    Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G

    2017-04-01

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.

  14. Influence of color word availability on the Stroop color-naming effect.

    PubMed

    Kim, Hyosun; Cho, Yang Seok; Yamaguchi, Motonori; Proctor, Robert W

    2008-11-01

    Three experiments tested whether the Stroop color-naming effect is a consequence of word recognition's being automatic or of the color word's capturing visual attention. In Experiment 1, a color bar was presented at fixation as the color carrier, with color and neutral words presented in locations above or below the color bar; Experiment 2 was similar, except that the color carrier could occur in one of the peripheral locations and the color word at fixation. The Stroop effect increased as display duration increased, and the Stroop dilution effect (a reduced Stroop effect when a neutral word is also present) was an approximately constant proportion of the Stroop effect at all display durations, regardless of whether the color bar or color word was at fixation. In Experiment 3, the interval between the onsets of the to-be-named color and the color word was manipulated. The Stroop effect decreased with increasing delay of the color word onset, but the absolute amount of Stroop dilution produced by the neutral word increased. This study's results imply that an attention shift from the color carrier to the color word is an important factor modulating the size of the Stroop effect.

  15. Noise-robust speech recognition through auditory feature detection and spike sequence decoding.

    PubMed

    Schafer, Phillip B; Jin, Dezhe Z

    2014-03-01

    Speech recognition in noisy conditions is a major challenge for computer systems, but the human brain performs it routinely and accurately. Automatic speech recognition (ASR) systems that are inspired by neuroscience can potentially bridge the performance gap between humans and machines. We present a system for noise-robust isolated word recognition that works by decoding sequences of spikes from a population of simulated auditory feature-detecting neurons. Each neuron is trained to respond selectively to a brief spectrotemporal pattern, or feature, drawn from the simulated auditory nerve response to speech. The neural population conveys the time-dependent structure of a sound by its sequence of spikes. We compare two methods for decoding the spike sequences--one using a hidden Markov model-based recognizer, the other using a novel template-based recognition scheme. In the latter case, words are recognized by comparing their spike sequences to template sequences obtained from clean training data, using a similarity measure based on the length of the longest common sub-sequence. Using isolated spoken digits from the AURORA-2 database, we show that our combined system outperforms a state-of-the-art robust speech recognizer at low signal-to-noise ratios. Both the spike-based encoding scheme and the template-based decoding offer gains in noise robustness over traditional speech recognition methods. Our system highlights potential advantages of spike-based acoustic coding and provides a biologically motivated framework for robust ASR development.
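
    A minimal sketch of the template-based decoder, assuming each utterance has been reduced to an ordered sequence of feature-detector labels; similarity is the longest-common-subsequence length normalized by template length (the names and normalization are illustrative):

    ```python
    def lcs_length(a, b):
        """Length of the longest common subsequence of two spike-label sequences."""
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i, x in enumerate(a, 1):
            for j, y in enumerate(b, 1):
                dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
        return dp[-1][-1]

    def recognize(spike_sequence, templates):
        """Pick the word whose clean-speech template is most similar to the input."""
        def best_similarity(word):
            return max(lcs_length(spike_sequence, t) / len(t) for t in templates[word])
        return max(templates, key=best_similarity)

    # Toy templates: each word maps to one or more label sequences from clean training data.
    templates = {"one": [["f3", "f1", "f7", "f2"]], "two": [["f5", "f4", "f6"]]}
    print(recognize(["f3", "f9", "f1", "f2"], templates))  # -> "one"
    ```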

  16. Memory loss versus memory distortion: the role of encoding and retrieval deficits in Korsakoff patients' false memories.

    PubMed

    Van Damme, Ilse; d'Ydewalle, Gery

    2009-05-01

    Recent studies with the Deese/Roediger-McDermott (DRM) paradigm have revealed that Korsakoff patients show reduced levels of false recognition and different patterns of false recall compared to controls. The present experiment examined whether this could be attributed to an encoding deficit, or rather to problems with explicitly retrieving thematic information at test. In a variation on the DRM paradigm, both patients and controls were presented with associative as well as categorised word lists, with the order of recall and recognition tests manipulated between-subjects. The results point to an important role for the automatic/controlled retrieval distinction: Korsakoff patients' false memory was only diminished compared to controls' when automatic or short-term memory processes could not be used to fulfil the task at hand. Hence, the patients' explicit retrieval deficit appears to be crucial in explaining past and present data. Results are discussed in terms of fuzzy-trace and activation-monitoring theories.

  17. Character-level neural network for biomedical named entity recognition.

    PubMed

    Gridach, Mourad

    2017-06-01

    Biomedical named entity recognition (BNER), which extracts important named entities such as genes and proteins, is a challenging task in automated systems that mine knowledge in biomedical texts. The previous state-of-the-art systems required large amounts of task-specific knowledge in the form of feature engineering, lexicons and data pre-processing to achieve high performance. In this paper, we introduce a novel neural network architecture that benefits from both word- and character-level representations automatically, by using a combination of bidirectional long short-term memory (LSTM) and conditional random field (CRF) eliminating the need for most feature engineering tasks. We evaluate our system on two datasets: JNLPBA corpus and the BioCreAtIvE II Gene Mention (GM) corpus. We obtained state-of-the-art performance by outperforming the previous systems. To the best of our knowledge, we are the first to investigate the combination of deep neural networks, CRF, word embeddings and character-level representation in recognizing biomedical named entities. Copyright © 2017 Elsevier Inc. All rights reserved.
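
    A compact PyTorch sketch of the word-plus-character architecture, with a linear tag scorer standing in for the CRF layer; the dimensions, vocabulary sizes and the omitted CRF are all simplifications for illustration:

    ```python
    import torch
    import torch.nn as nn

    class CharWordTagger(nn.Module):
        """Word + character BiLSTM tagger (CRF decoding layer omitted for brevity)."""

        def __init__(self, n_words, n_chars, n_tags, wdim=50, cdim=25, hidden=64):
            super().__init__()
            self.wemb = nn.Embedding(n_words, wdim)
            self.cemb = nn.Embedding(n_chars, cdim)
            self.char_lstm = nn.LSTM(cdim, cdim, batch_first=True, bidirectional=True)
            self.word_lstm = nn.LSTM(wdim + 2 * cdim, hidden, batch_first=True,
                                     bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_tags)

        def forward(self, word_ids, char_ids):
            # char_ids: (batch, seq_len, max_word_len)
            b, s, c = char_ids.shape
            chars = self.cemb(char_ids.view(b * s, c))
            _, (h, _) = self.char_lstm(chars)                  # final states, both directions
            char_repr = torch.cat([h[0], h[1]], dim=-1).view(b, s, -1)
            x = torch.cat([self.wemb(word_ids), char_repr], dim=-1)
            h_word, _ = self.word_lstm(x)
            return self.out(h_word)                            # per-token tag scores

    model = CharWordTagger(n_words=100, n_chars=30, n_tags=5)
    scores = model(torch.randint(0, 100, (2, 7)), torch.randint(0, 30, (2, 7, 12)))
    print(scores.shape)  # (2, 7, 5)
    ```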

  18. What Is in the Naming? A 5-Year Longitudinal Study of Early Rapid Naming and Phonological Sensitivity in Relation to Subsequent Reading Skills in Both Native Chinese and English as a Second Language

    ERIC Educational Resources Information Center

    Pan, Jinger; McBride-Chang, Catherine; Shu, Hua; Liu, Hongyun; Zhang, Yuping; Li, Hong

    2011-01-01

    Among 262 Chinese children, syllable awareness and rapid automatized naming (RAN) at age 5 years and invented spelling of Pinyin at age 6 years independently predicted subsequent Chinese character recognition and English word reading at ages 8 years and 10 years, even with initial Chinese character reading ability statistically controlled. In…

  19. Orthographic recognition in late adolescents: an assessment through event-related brain potentials.

    PubMed

    González-Garrido, Andrés Antonio; Gómez-Velázquez, Fabiola Reveca; Rodríguez-Santillán, Elizabeth

    2014-04-01

    Reading speed and efficiency are achieved through the automatic recognition of written words. Difficulties in learning and recognizing the orthography of words can arise despite reiterative exposure to texts. This study aimed to investigate, in native Spanish-speaking late adolescents, how different levels of orthographic knowledge might result in behavioral and event-related brain potential differences during the recognition of orthographic errors. Forty-five healthy high school students were selected and divided into 3 equal groups (High, Medium, Low) according to their performance on a 5-test battery of orthographic knowledge. All participants performed an orthographic recognition task consisting of the sequential presentation of a picture (object, fruit, or animal) followed by a correctly, or incorrectly, written word (orthographic mismatch) that named the picture just shown. Electroencephalogram (EEG) recording took place simultaneously. Behavioral results showed that the Low group had a significantly lower number of correct responses and increased reaction times while processing orthographical errors. Tests showed significant positive correlations between higher performance on the experimental task and faster and more accurate reading. The P150 and P450 components showed higher voltages in the High group when processing orthographic errors, whereas N170 seemed less lateralized to the left hemisphere in the lower orthographic performers. Also, trials with orthographic errors elicited a frontal P450 component that was only evident in the High group. The present results show that higher levels of orthographic knowledge correlate with high reading performance, likely because of faster and more accurate perceptual processing, better visual orthographic representations, and top-down supervision, as the event-related brain potential findings seem to suggest.

  20. Emotion Recognition of Weblog Sentences Based on an Ensemble Algorithm of Multi-label Classification and Word Emotions

    NASA Astrophysics Data System (ADS)

    Li, Ji; Ren, Fuji

    Weblogs have greatly changed the communication ways of mankind. Affective analysis of blog posts is found valuable for many applications such as text-to-speech synthesis or computer-assisted recommendation. Traditional emotion recognition in text based on single-label classification cannot satisfy the higher requirements of affective computing. In this paper, the automatic identification of sentence emotion in weblogs is modeled as a multi-label text categorization task. Experiments are carried out on 12273 blog sentences from the Chinese emotion corpus Ren_CECps with 8-dimension emotion annotation. An ensemble algorithm, RAKEL, is used to recognize dominant emotions from the writer's perspective. Our emotion feature, using detailed intensity representation for word emotions, outperforms the other main features such as the word frequency feature and the traditional lexicon-based feature. In order to deal with relatively complex sentences, we integrate grammatical characteristics of punctuation, disjunctive connectives, modification relations and negation into the features. This achieves 13.51% and 12.49% increases in Micro-averaged F1 and Macro-averaged F1, respectively, compared to the traditional lexicon-based feature. The results show that multiple-dimension emotion representation with grammatical features can efficiently classify sentence emotion in a multi-label problem.
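
    A toy multi-label setup in the spirit of the experiment above, assuming the scikit-multilearn implementation of RAKEL (RakelD) and TF-IDF word features; the sentences, the 8-dimension label vectors and the label order are invented for illustration:

    ```python
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from skmultilearn.ensemble import RakelD  # assumed dependency: scikit-multilearn

    sentences = ["I finally passed the exam!", "The rain ruined our whole trip.",
                 "Missing home so much tonight.", "What a surprise ending to the movie!"]
    # Hypothetical 8-dimension emotion labels, e.g. [joy, hate, love, sorrow, anxiety, surprise, anger, expect].
    labels = np.array([[1, 0, 0, 0, 0, 1, 0, 0],
                       [0, 0, 0, 1, 0, 0, 1, 0],
                       [0, 0, 1, 1, 0, 0, 0, 0],
                       [1, 0, 0, 0, 0, 1, 0, 0]])

    vec = TfidfVectorizer()
    X = vec.fit_transform(sentences)
    clf = RakelD(base_classifier=MultinomialNB(), labelset_size=3)
    clf.fit(X, labels)
    print(clf.predict(vec.transform(["So happy and surprised tonight!"])).toarray())
    ```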

  1. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space

    PubMed Central

    Li, Kan; Príncipe, José C.

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural network (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regime. PMID:29666568

  2. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space.

    PubMed

    Li, Kan; Príncipe, José C

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing and neuromorphic implementations based on spiking neural networks (SNNs), yielding accurate and ultra-low-power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to an HMM using a Mel-frequency cepstral coefficient (MFCC) front-end without time derivatives, our MFCC-KAARMA offered improved performance. For the spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes.

  3. A novel method of language modeling for automatic captioning in TC video teleconferencing.

    PubMed

    Zhang, Xiaojia; Zhao, Yunxin; Schopp, Laura

    2007-05-01

    We are developing an automatic captioning system for teleconsultation video teleconferencing (TC-VTC) in telemedicine, based on large-vocabulary conversational speech recognition. In TC-VTC, doctors' speech contains a large number of infrequently used medical terms in spontaneous styles. Owing to the insufficiency of data, we adopted mixture language modeling, with models trained from several datasets of medical and nonmedical domains. This paper proposes novel modeling and estimation methods for the mixture language model (LM). Component LMs are trained from individual datasets, with class n-gram LMs trained from in-domain datasets and word n-gram LMs trained from out-of-domain datasets, and they are interpolated into a mixture LM. For the class LMs, semantic categories are used to define classes for medical terms, names, and digits. The interpolation weights of the mixture LM are estimated by a greedy algorithm of forward weight adjustment (FWA). The proposed mixing of in-domain class LMs and out-of-domain word LMs, the semantic definitions of word classes, and the FWA weight-estimation algorithm are all effective on the TC-VTC task. Compared with mixtures of word LMs whose weights were estimated by the conventional expectation-maximization algorithm, the proposed methods led to a 21% reduction in perplexity on test sets from five doctors, which translated into improvements in captioning accuracy.
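
    The mixture LM described above interpolates component models as P_mix(w|h) = sum_i lambda_i P_i(w|h), with the weights lambda_i chosen to reduce perplexity on held-out text. The sketch below shows one simple greedy weight search in that spirit; it is not the authors' forward weight adjustment algorithm, and the data layout (a per-token probability matrix) is an assumption.

      # Greedy, perplexity-driven interpolation-weight search (illustrative sketch).
      import numpy as np

      def perplexity(weights, comp_probs):
          """comp_probs: (n_tokens, n_models) per-token probabilities from each component LM."""
          mix = comp_probs @ weights
          return float(np.exp(-np.mean(np.log(mix))))

      def greedy_weights(comp_probs, step=0.05, iters=200):
          n = comp_probs.shape[1]
          w = np.full(n, 1.0 / n)                 # start from uniform weights
          best = perplexity(w, comp_probs)
          for _ in range(iters):
              improved = False
              for i in range(n):
                  # Move some probability mass toward component i, then renormalize.
                  cand = np.maximum(w + step * (np.eye(n)[i] - w), 0)
                  cand /= cand.sum()
                  p = perplexity(cand, comp_probs)
                  if p < best:
                      w, best, improved = cand, p, True
              if not improved:
                  break
          return w, best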

  4. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  5. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    ERIC Educational Resources Information Center

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  6. Automated smartphone audiometry: Validation of a word recognition test app.

    PubMed

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2018-03-01

    Objective: To develop and validate an automated smartphone word recognition test. Study design: Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold-standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  8. Phonologic-graphemic transcodifier for Portuguese Language spoken in Brazil (PLB)

    NASA Astrophysics Data System (ADS)

    Fragadasilva, Francisco Jose; Saotome, Osamu; Deoliveira, Carlos Alberto

    An automatic speech-to-text transformer system, suited to an unlimited vocabulary, is presented. The basic acoustic units considered are the allophones of the phonemes of the Portuguese language spoken in Brazil (PLB). The input to the system is a phonetic sequence produced by a preceding isolated-word recognition step applied to slowly spoken speech. In a first stage, the system eliminates phonetic elements that do not belong to PLB. Using knowledge sources such as phonetics, phonology, orthography, and a PLB-specific lexicon, the output is a sequence of written words, ordered by a probabilistic criterion, that constitutes the set of graphemic possibilities for the input sequence. Pronunciation differences among regions of Brazil are considered, but only those that cause differences in the phonological transcription, because differences at the phonetic level are absorbed during the transformation to the phonological level. In the final stage, all possible written words are analyzed from an orthographic and grammatical point of view to eliminate the incorrect ones.

  9. A reversed-typicality effect in pictures but not in written words in deaf and hard of hearing adolescents.

    PubMed

    Li, Degao; Gao, Kejuan; Wu, Xueyun; Xong, Ying; Chen, Xiaojun; He, Weiwei; Li, Ling; Huang, Jingjia

    2015-01-01

    Two experiments investigated Chinese deaf and hard of hearing (DHH) adolescents' recognition of category names in an innovative semantic categorization task. In each trial, the category-name target appeared briefly at the screen center, followed by two words or two pictures naming or depicting two basic-level exemplars of high or middle typicality, which appeared briefly approximately where the target had appeared. Participants' reaction times when deciding whether the target referred to living or nonliving things consistently revealed a typicality effect for word-presented exemplars, but a reversed-typicality effect for picture-presented exemplars. The findings suggest that, in automatically processing a category name, DHH adolescents with natural sign language as their first language activate two sets of exemplar representations: those for middle-typicality exemplars, developed through interaction with the physical world and the use of sign language, and those developed through written-language learning.

  10. Lexical and age effects on word recognition in noise in normal-hearing children.

    PubMed

    Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing

    2015-12-01

    The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare word-recognition performance in noise with that in quiet listening conditions. Participants were 213 NH children aged between 3 and 6 years; 89 and 124 of the participants were tested in noise and in quiet, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (disyllabic easy (DE), disyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech-spectrum-shaped noise (SSN) at a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance, with syllable length and difficulty level as the main factors, was conducted to examine the lexical effects on word recognition in the quiet and noise listening conditions. The effect of age on word-recognition performance was examined using a regression model. Word-recognition performance in noise was significantly poorer than in quiet, and individual variation in performance was much greater in noise than in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with disyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4%, 65.9%, 71.7%, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and the lexical characteristics of words had significant influences on Mandarin Chinese word recognition in noise. The lexical effects were more pronounced in noise than in quiet. Word-recognition performance in noise increased with age in NH children of 3-6 years old and had not reached a plateau by 6 years of age. Copyright © 2015. Published by Elsevier Ireland Ltd.

  11. The effect of word concreteness on recognition memory.

    PubMed

    Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B

    2006-09-01

    Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.

  12. Study of style effects on OCR errors in the MEDLINE database

    NASA Astrophysics Data System (ADS)

    Garrison, Penny; Davis, Diane L.; Andersen, Tim L.; Barney Smith, Elisa H.

    2005-01-01

    The National Library of Medicine has developed a system for the automatic extraction of data from scanned journal articles to populate the MEDLINE database. Although the 5-engine OCR system used in this process exhibits good performance overall, it does make character recognition errors that must be corrected for the process to achieve the requisite accuracy. The correction process works by feeding words that contain characters with less than 100% confidence (as determined automatically by the OCR engine) to a human operator, who must then verify the word or correct the error. The majority of these errors occur in the affiliation information zone, where the characters are in italics or small fonts; therefore, only affiliation information data are used in this research. This paper examines the correlation between OCR errors and various character attributes in the MEDLINE database, such as font size, italics, and bold, as well as OCR confidence levels. The motivation for this research is that, if a correlation between character style and types of errors exists, it should be possible to use this information to improve operator productivity by increasing the probability that the correct word option is presented to the human editor. We have determined that this correlation exists, in particular for characters with diacritics.

  13. Automated recognition and extraction of tabular fields for the indexing of census records

    NASA Astrophysics Data System (ADS)

    Clawson, Robert; Bauer, Kevin; Chidester, Glen; Pohontsch, Milan; Kennard, Douglas; Ryu, Jongha; Barrett, William

    2013-01-01

    We describe a system for indexing of census records in tabular documents with the goal of recognizing the content of each cell, including both headers and handwritten entries. Each document is automatically rectified, registered and scaled to a known template following which lines and fields are detected and delimited as cells in a tabular form. Whole-word or whole-phrase recognition of noisy machine-printed text is performed using a glyph library, providing greatly increased efficiency and accuracy (approaching 100%), while avoiding the problems inherent with traditional OCR approaches. Constrained handwriting recognition results for a single author reach as high as 98% and 94.5% for the Gender field and Birthplace respectively. Multi-author accuracy (currently 82%) can be improved through an increased training set. Active integration of user feedback in the system will accelerate the indexing of records while providing a tightly coupled learning mechanism for system improvement.

  14. Anatomical entity mention recognition at literature scale

    PubMed Central

    Pyysalo, Sampo; Ananiadou, Sophia

    2014-01-01

    Motivation: Anatomical entities ranging from subcellular structures to organ systems are central to biomedical science, and mentions of these entities are essential to understanding the scientific literature. Despite extensive efforts to automatically analyze various aspects of biomedical text, there have been only a few studies focusing on anatomical entities, and no dedicated methods for learning to automatically recognize anatomical entity mentions in free-form text have been introduced. Results: We present AnatomyTagger, a machine learning-based system for anatomical entity mention recognition. The system incorporates a broad array of approaches proposed to benefit tagging, including the use of Unified Medical Language System (UMLS)- and Open Biomedical Ontologies (OBO)-based lexical resources, word representations induced from unlabeled text, statistical truecasing, and non-local features. We train and evaluate the system on a newly introduced corpus that substantially extends previously available resources, and apply the resulting tagger to automatically annotate the entire open-access scientific domain literature. The resulting analyses have been applied to extend services provided by the Europe PubMed Central literature database. Availability and implementation: All tools and resources introduced in this work are available from http://nactem.ac.uk/anatomytagger. Contact: sophia.ananiadou@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24162468

  15. Exploring a recognition-induced recognition decrement

    PubMed Central

    Dopkins, Stephen; Ngo, Catherine Trinh; Sargent, Jesse

    2007-01-01

    Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency. PMID:17063915

  16. Acquired prosopagnosia without word recognition deficits.

    PubMed

    Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley

    2015-01-01

    It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere, while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks, totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.

  17. Strategic deployment of orthographic knowledge in phoneme detection.

    PubMed

    Cutler, Anne; Treiman, Rebecca; van Ooijen, Brit

    2010-01-01

    The phoneme detection task is widely used in spoken-word recognition research. Alphabetically literate participants, however, are more used to explicit representations of letters than of phonemes. The present study explored whether phoneme detection is sensitive to how target phonemes are, or may be, orthographically realized. Listeners detected the target sounds [b, m, t, f, s, k] in word-initial position in sequences of isolated English words. Response times were faster to the targets [b, m, t], which have consistent word-initial spelling, than to the targets [f, s, k], which are inconsistently spelled, but only when spelling was rendered salient by the presence in the experiment of many irregularly spelled filler words. Within the inconsistent targets [f, s, k], there was no significant difference between responses to targets in words with more usual (foam, seed, cattle) versus less usual (phone, cede, kettle) spellings. Phoneme detection is thus not necessarily sensitive to orthographic effects; knowledge of spelling stored in the lexical representations of words does not automatically become available as word candidates are activated. However, salient orthographic manipulations in experimental input can induce such sensitivity. We attribute this to listeners' experience of the value of spelling in everyday situations that encourage phonemic decisions (such as learning new names).

  18. Improved word recognition for observers with age-related maculopathies using compensation filters

    NASA Technical Reports Server (NTRS)

    Lawton, Teri B.

    1988-01-01

    A method for improving word recognition for people with age-related maculopathies, which cause a loss of central vision, is discussed. It is found that the use of individualized compensation filters, based on a person's normalized contrast sensitivity function, can improve word recognition for people with age-related maculopathies. It is shown that 27-70 pct more magnification is needed for unfiltered words than for filtered words. The improvement in word recognition is positively correlated with the severity of vision loss.

  19. The software for automatic creation of the formal grammars used by speech recognition, computer vision, editable text conversion systems, and some new functions

    NASA Astrophysics Data System (ADS)

    Kardava, Irakli; Tadyszak, Krzysztof; Gulua, Nana; Jurga, Stefan

    2017-02-01

    For artificial intelligence to perceive its environment more flexibly, supporting software modules are needed that can automate the creation of language-specific syntax and carry out further analysis for relevant decisions based on semantic functions. With the proposed approach, pairs of formal rules can be created for given sentences (in the case of natural languages) or statements (in the case of special languages) with the help of computer vision, speech recognition, or an editable-text conversion system, and then further improved automatically. In other words, we have developed an approach that significantly improves the automation of the training process of artificial intelligence, which in turn yields a higher level of self-development that is independent of the user. On the basis of this approach we have developed a demo version of the software, which includes the algorithm and program code implementing all of the above-mentioned components (computer vision, speech recognition, and editable-text conversion). The program can operate in multi-stream mode and simultaneously create a syntax based on information received from several sources.

  20. [Explicit memory for type font of words in source monitoring and recognition tasks].

    PubMed

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember the type fonts of words, using two methods of examining explicit memory: source monitoring and old/new recognition. We set up matched, non-matched, and non-studied conditions between the study and test words using two type fonts, Gothic and MARU. After studying words under one of two encoding conditions, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Experiment 1). Subjects in an old/new-recognition task indicated whether test words had been presented previously or not (Experiment 2). We compared the source judgments with the old/new-recognition data. These data showed conscious recollection of the type font of words in the source-monitoring task and a dissociation between source-monitoring and old/new-recognition performance.

  1. Recurrent neural networks with specialized word embeddings for health-domain named-entity recognition.

    PubMed

    Jauregi Unanue, Iñigo; Zare Borzeshi, Ehsan; Piccardi, Massimo

    2017-12-01

    Previous state-of-the-art systems for Drug Name Recognition (DNR) and Clinical Concept Extraction (CCE) have focused on a combination of text "feature engineering" and conventional machine learning algorithms such as conditional random fields and support vector machines. However, developing good features is inherently time-consuming. Conversely, more modern machine learning approaches such as recurrent neural networks (RNNs) have proved capable of automatically learning effective features from either random assignments or automated word "embeddings". Our aims were (i) to create a highly accurate DNR and CCE system that avoids conventional, time-consuming feature engineering; (ii) to create richer, more specialized word embeddings by using health-domain datasets such as MIMIC-III; and (iii) to evaluate our systems over three contemporary datasets. Two deep learning methods, namely the bidirectional LSTM and the bidirectional LSTM-CRF, are evaluated. A CRF model is set as the baseline to compare the deep learning systems to a traditional machine learning approach. The same features are used for all the models. We obtained the best results with the bidirectional LSTM-CRF model, which outperformed all previously proposed systems. The specialized embeddings helped to cover unusual words in DrugBank and MedLine, but not in the i2b2/VA dataset. We present a state-of-the-art system for DNR and CCE. Automated word embeddings allowed us to avoid costly feature engineering and to achieve higher accuracy. Nevertheless, the embeddings need to be retrained on datasets that are appropriate for the domain in order to cover the domain-specific vocabulary. Copyright © 2017 Elsevier Inc. All rights reserved.
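
    For readers unfamiliar with the model family, the sketch below shows a minimal bidirectional LSTM tagger in PyTorch; the CRF output layer used in the paper is replaced by a plain per-token linear layer to keep the example short, and all names and dimensions are illustrative rather than the authors' configuration.

      # Minimal BiLSTM tagger sketch for BIO-style entity tagging (CRF layer omitted).
      import torch
      import torch.nn as nn

      class BiLSTMTagger(nn.Module):
          def __init__(self, vocab_size, n_tags, emb_dim=100, hidden=128):
              super().__init__()
              self.emb = nn.Embedding(vocab_size, emb_dim)
              self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, n_tags)

          def forward(self, token_ids):            # token_ids: (batch, seq_len)
              h, _ = self.lstm(self.emb(token_ids))
              return self.out(h)                   # (batch, seq_len, n_tags) tag scores

      # Example forward pass on a dummy batch of two 5-token sentences.
      model = BiLSTMTagger(vocab_size=10000, n_tags=9)
      scores = model(torch.randint(0, 10000, (2, 5)))
      print(scores.shape)  # torch.Size([2, 5, 9])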

  2. Genetic and environmental influences on word recognition and spelling deficits as a function of age.

    PubMed

    Friend, Angela; DeFries, John C; Wadsworth, Sally J; Olson, Richard K

    2007-05-01

    Previous twin studies have suggested a possible developmental dissociation between genetic influences on word recognition and spelling deficits, wherein genetic influence declined across age for word recognition, and increased for spelling recognition. The present study included two measures of word recognition (timed, untimed) and two measures of spelling (recognition, production) in younger and older twins. The heritability estimates for the two word recognition measures were .65 (timed) and .64 (untimed) in the younger group and .65 and .58 respectively in the older group. For spelling, the corresponding estimates were .57 (recognition) and .51 (production) in the younger group and .65 and .67 in the older group. Although these age group differences were not significant, the pattern of decline in heritability across age for reading and increase for spelling conformed to that predicted by the developmental dissociation hypothesis. However, the tests for an interaction between genetic influences on word recognition and spelling deficits as a function of age were not significant.

  3. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America

  4. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective: A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design: Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: (1) the number of words occurring in a neighborhood, (2) the degree of phonetic similarity among the words, and (3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results: The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults. PMID:9504270
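
    The "single expression" mentioned above is the frequency-weighted neighborhood probability rule. A compact statement of that rule, in illustrative notation (the exact formulation and frequency scaling are given in the paper), is:

      \[
        p(\mathrm{ID}) \;=\; \frac{p(S)\, f(S)}{p(S)\, f(S) \;+\; \sum_{j=1}^{N} p(N_j)\, f(N_j)}
      \]

    where p(S) is the stimulus word's intelligibility, p(N_j) the confusability of the j-th neighbor with the stimulus, and f(.) a word-frequency weight, so that words with few, low-frequency, dissimilar neighbors are predicted to be identified more accurately.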

  5. The effect of background noise on the word activation process in nonnative spoken-word recognition.

    PubMed

    Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland

    2018-02-01

    This article investigates two questions: (1) Does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? (2) Do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with a higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  6. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.

  7. An Investigation of the Role of Grapheme Units in Word Recognition

    ERIC Educational Resources Information Center

    Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel

    2012-01-01

    In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…

  8. Severe difficulties with word recognition in noise after platinum chemotherapy in childhood, and improvements with open-fitting hearing-aids.

    PubMed

    Einarsson, Einar-Jón; Petersen, Hannes; Wiebe, Thomas; Fransson, Per-Anders; Magnusson, Måns; Moëll, Christian

    2011-10-01

    To investigate word recognition in noise in subjects treated in childhood with chemotherapy, study benefits of open-fitting hearing-aids for word recognition, and investigate whether self-reported hearing-handicap corresponded to subjects' word recognition ability. Subjects diagnosed with cancer and treated with platinum-based chemotherapy in childhood underwent audiometric evaluations. Fifteen subjects (eight females and seven males) fulfilled the criteria set for the study, and four of those received customized open-fitting hearing-aids. Subjects with cisplatin-induced ototoxicity had severe difficulties recognizing words in noise, and scored as low as 54% below reference scores standardized for age and degree of hearing loss. Hearing-impaired subjects' self-reported hearing-handicap correlated significantly with word recognition in a quiet environment but not in noise. Word recognition in noise improved markedly (up to 46%) with hearing-aids, and the self-reported hearing-handicap and disability score were reduced by more than 50%. This study demonstrates the importance of testing word recognition in noise in subjects treated with platinum-based chemotherapy in childhood, and to use specific custom-made questionnaires to evaluate the experienced hearing-handicap. Open-fitting hearing-aids are a good alternative for subjects suffering from poor word recognition in noise.

  9. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
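
    The retrieval step described above can be pictured as nearest-neighbor search over a lexicon of stored feature vectors, several per word to cover different fonts and noise models. The toy sketch below uses Euclidean distance and invented data structures; the paper's image-morphological features and match score are not reproduced here.

      # Toy word-level matching sketch: rank lexicon entries by distance to a query vector.
      import numpy as np

      def rank_hypotheses(query_vec, lexicon, top_n=5):
          """lexicon: dict mapping word -> list of feature vectors (one per font/noise model)."""
          scores = {}
          for word, vectors in lexicon.items():
              # Best match over the multiple stored vectors for this word.
              scores[word] = min(np.linalg.norm(query_vec - v) for v in vectors)
          return sorted(scores, key=scores.get)[:top_n]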

  10. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test, and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  11. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  12. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  13. Developmental Spelling and Word Recognition: A Validation of Ehri's Model of Word Recognition Development

    ERIC Educational Resources Information Center

    Ebert, Ashlee A.

    2009-01-01

    Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…

  14. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    NASA Astrophysics Data System (ADS)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A multi-word recognition framework based on weighted finite-state transducers is presented, using explicit word segmentation, a combination of isolated-word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and was submitted to the Rimes-ICDAR2011 competition. It outperformed all previously proposed systems on these tasks.

  15. Separable spectro-temporal Gabor filter bank features: Reducing the complexity of robust features for automatic speech recognition.

    PubMed

    Schädler, Marc René; Kollmeier, Birger

    2015-04-01

    To test whether simultaneous spectral and temporal processing is required to extract robust features for automatic speech recognition (ASR), the robust spectro-temporal two-dimensional Gabor filter bank (GBFB) front-end from Schädler, Meyer, and Kollmeier [J. Acoust. Soc. Am. 131, 4134-4151 (2012)] was decomposed into a spectral one-dimensional Gabor filter bank and a temporal one-dimensional Gabor filter bank. A feature set extracted with these separate spectral and temporal modulation filter banks, the separate Gabor filter bank (SGBFB) features, was introduced and evaluated on the CHiME (Computational Hearing in Multisource Environments) keywords-in-noise recognition task. From the perspective of robust ASR, the results showed that spectral and temporal processing can be performed independently and are not required to interact with each other. Using SGBFB features permitted the signal-to-noise ratio (SNR) to be lowered by 1.2 dB while still performing as well as the GBFB-based reference system, which corresponds to a relative improvement of the word error rate by 12.8%. Additionally, the real-time factor of the spectro-temporal processing could be reduced by more than an order of magnitude. Compared to human listeners, the SNR needed to be 13 dB higher when using Mel-frequency cepstral coefficient features, 11 dB higher when using GBFB features, and 9 dB higher when using SGBFB features to achieve the same recognition performance.
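
    To make the idea of separable processing concrete, the sketch below applies a 1-D Gabor filter first along the spectral (channel) axis and then along the temporal (frame) axis of a log-mel spectrogram. The filter form (a Hann-windowed complex exponential) and the modulation frequencies are illustrative assumptions, not the published SGBFB parameters.

      # Separable spectro-temporal Gabor filtering sketch.
      import numpy as np
      from scipy.signal import convolve

      def gabor_1d(omega, size=15):
          """Complex 1-D Gabor filter: Hann envelope times a complex exponential."""
          n = np.arange(size) - size // 2
          envelope = 0.5 - 0.5 * np.cos(2 * np.pi * (n + size // 2) / (size - 1))
          return envelope * np.exp(1j * omega * n)

      def separable_gabor_features(log_mel, omega_k=0.5, omega_n=0.25):
          """log_mel: (n_channels, n_frames) array of log mel-band energies."""
          spec_filt = gabor_1d(omega_k)[:, None]     # filter along the frequency axis
          temp_filt = gabor_1d(omega_n)[None, :]     # filter along the time axis
          spectral = convolve(log_mel, spec_filt, mode="same")
          temporal = convolve(log_mel, temp_filt, mode="same")
          return np.abs(spectral), np.abs(temporal)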

  16. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, and abbreviated-word recognition is one of the key and most difficult problems in TWS. Most existing methods for Tibetan abbreviated-word recognition are rule-based approaches, which require vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated-word recognition and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated-word recognition method is fast and effective and can easily be combined with the segmentation model, which significantly improves the performance of Tibetan word segmentation.

  17. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    PubMed

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.

  18. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    PubMed

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited, despite the fact that the vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences: the Arabic group showed higher accuracy in the final position than in the middle position, the Chinese group showed the opposite pattern, and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  19. Automatic Item Generation of Probability Word Problems

    ERIC Educational Resources Information Center

    Holling, Heinz; Bertling, Jonas P.; Zeuch, Nina

    2009-01-01

    Mathematical word problems represent a common item format for assessing student competencies. Automatic item generation (AIG) is an effective way of constructing many items with predictable difficulties, based on a set of predefined task parameters. The current study presents a framework for the automatic generation of probability word problems…
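
    As a toy illustration of template-based automatic item generation, the sketch below samples task parameters, substitutes them into a fixed probability word-problem template, and stores the solution for automatic scoring. The template text and parameter ranges are invented for the example and are unrelated to the study's item models.

      # Toy template-based item generator for probability word problems.
      import random
      from fractions import Fraction

      TEMPLATE = ("A bag contains {red} red and {blue} blue marbles. "
                  "One marble is drawn at random. "
                  "What is the probability that it is red?")

      def generate_item(seed=None):
          rng = random.Random(seed)
          red, blue = rng.randint(2, 9), rng.randint(2, 9)
          solution = Fraction(red, red + blue)   # exact answer kept for automatic scoring
          return TEMPLATE.format(red=red, blue=blue), solution

      stem, answer = generate_item(seed=1)
      print(stem, "Answer:", answer)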

  20. Font adaptive word indexing of modern printed documents.

    PubMed

    Marinai, Simone; Marino, Emanuele; Soda, Giovanni

    2006-08-01

    We propose an approach for the word-level indexing of modern printed documents which are difficult to recognize using current OCR engines. By means of word-level indexing, it is possible to retrieve the position of words in a document, enabling queries involving proximity of terms. Web search engines implement this kind of indexing, allowing users to retrieve Web pages on the basis of their textual content. Nowadays, digital libraries hold collections of digitized documents that can be retrieved either by browsing the document images or relying on appropriate metadata assembled by domain experts. Word indexing tools would therefore increase the access to these collections. The proposed system is designed to index homogeneous document collections by automatically adapting to different languages and font styles without relying on OCR engines for character recognition. The approach is based on three main ideas: the use of Self Organizing Maps (SOM) to perform unsupervised character clustering, the definition of one suitable vector-based word representation whose size depends on the word aspect-ratio, and the run-time alignment of the query word with indexed words to deal with broken and touching characters. The most appropriate applications are for processing modern printed documents (17th to 19th centuries) where current OCR engines are less accurate. Our experimental analysis addresses six data sets containing documents ranging from books of the 17th century to contemporary journals.

  1. Automatic speech recognition and training for severely dysarthric users of assistive technology: the STARDUST project.

    PubMed

    Parker, Mark; Cunningham, Stuart; Enderby, Pam; Hawley, Mark; Green, Phil

    2006-01-01

    The STARDUST project developed robust computer speech recognizers for use by eight people with severe dysarthria and concomitant physical disability to access assistive technologies. Independent computer speech recognizers trained with normal speech are of limited functional use to those with severe dysarthria, owing to limited and inconsistent proximity to "normal" articulatory patterns. Severe dysarthric output may also be characterized by a small set of distinguishable phonetic tokens, making the acoustic differentiation of target words difficult. Speaker-dependent computer speech recognition using hidden Markov models was achieved by identifying robust phonetic elements within the individual speaker's output patterns. A new system of speech training using computer-generated visual and auditory feedback reduced the inconsistent production of key phonetic tokens over time.

  2. Clinical implications of word recognition differences in earphone and aided conditions

    PubMed Central

    McRackan, Theodore R.; Ahlstrom, Jayne B.; Clinkscales, William B.; Meyer, Ted A.; Dubno, Judy R

    2017-01-01

    Objective: To compare word recognition scores for adults with hearing loss measured using earphones and in the sound field without and with hearing aids (HAs). Study design: Independent review of pre-surgical audiological data from an active middle ear implant (MEI) FDA clinical trial. Setting: Multicenter prospective FDA clinical trial. Patients: Ninety-four adult HA users. Interventions/Main outcomes measured: Pre-operative earphone, unaided, and aided pure tone thresholds, word recognition scores, and speech intelligibility index. Results: We performed an independent review of pre-surgical audiological data from an MEI FDA trial and compared unaided and aided word recognition scores with participants' HAs fit according to the NAL-R algorithm. For 52 participants (55.3%), differences in scores between earphone and aided conditions were >10%; for 33 participants (35.1%), earphone scores exceeded aided scores by 10% or more. These participants had significantly higher pure tone thresholds (at 250 Hz, 500 Hz, and 1000 Hz), higher pure tone averages, higher speech recognition thresholds, and higher earphone speech levels (p=0.002). No significant correlation was observed between word recognition scores measured with earphones and with hearing aids (r=0.14; p=0.16), whereas a moderately high positive correlation was observed between unaided and aided word recognition (r=0.68; p<0.001). Conclusion: Results of these analyses do not support the common clinical practice of using word recognition scores measured with earphones to predict aided word recognition or hearing aid benefit. Rather, these results provide evidence supporting the measurement of aided word recognition in patients who are considering hearing aids. PMID:27631832

  3. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  4. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    ERIC Educational Resources Information Center

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  5. Asymmetries in Early Word Recognition: The Case of Stops and Fricatives

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; van der Feest, Suzanne V. H.; Fikkert, Paula

    2014-01-01

    Toddlers' discrimination of native phonemic contrasts is generally unproblematic. Yet using those native contrasts in word learning and word recognition can be more challenging. In this article, we investigate perceptual versus phonological explanations for asymmetrical patterns found in early word recognition. We systematically investigated the…

  6. Automatic Activation of Exercise and Sedentary Stereotypes

    ERIC Educational Resources Information Center

    Berry, Tanya; Spence, John C.

    2009-01-01

    We examined the automatic activation of "sedentary" and "exerciser" stereotypes using a social prime Stroop task. Results showed significantly slower response times between the exercise words and the exercise control words and between the sedentary words and the exercise control words when preceded by an attractive exerciser prime. Words preceded…

  7. Anticipatory coarticulation facilitates word recognition in toddlers.

    PubMed

    Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan

    2015-09-01

    Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18-24 month-olds (n=29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eyetracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word-learners can take advantage of anticipatory sub-phonemic cues during word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. False recognition production indexes in forward associative strength (FAS) lists with three critical words.

    PubMed

    Beato, María Soledad; Arndt, Jason

    2014-01-01

    False memory illusions have been widely studied using the Deese/Roediger-McDermott paradigm (DRM). In this paradigm, participants study words semantically related to a single nonpresented critical word. In a memory test, critical words are often falsely recalled and recognized. The present study was conducted to measure the levels of false recognition for seventy-five Spanish DRM word lists that have multiple critical words per list. Lists included three critical words (e.g., HELL, LUCIFER, and SATAN) simultaneously associated with six studied words (e.g., devil, demon, fire, red, bad, and evil). Different levels of forward associative strength (FAS) between the critical words and their studied associates were used in the construction of the lists. Specifically, we selected lists with the highest FAS values possible and FAS was continuously decreased in order to obtain the 75 lists. Six words per list, simultaneously associated with three critical words, were sufficient to produce false recognition. Furthermore, there was wide variability in rates of false recognition (e.g., 53% for DUNGEON, PRISON, and GRATES; 1% for BRACKETS, GARMENT, and CLOTHING). Finally, there was no correlation between false recognition and associative strength. False recognition variability could not be attributed to differences in the forward associative strength.

  9. Effect of task demands on dual coding of pictorial stimuli.

    PubMed

    Babbitt, B C

    1982-01-01

    Recent studies have suggested that verbal labeling of a picture does not occur automatically. Although several experiments using paired-associate tasks produced little evidence indicating the use of a verbal code with picture stimuli, the tasks were probably not sensitive to whether the codes were activated initially. It is possible that verbal labels were activated at input, but not used later in performing the tasks. The present experiment used a color-naming interference task in order to assess, with a more sensitive measure, the amount of verbal coding occurring in response to word or picture input. Subjects named the color of ink in which words were printed following either word or picture input. If verbal labeling of the input occurs, then latency of color naming should increase when the input item and color-naming word are related. The results provided substantial evidence of such verbal activation when the input items were words. However, the presence of verbal activation with picture input was a function of task demands. Activation occurred when a recall memory test was used, but not when a recognition memory test was used. The results support the conclusion that name information (labels) need not be activated during presentation of visual stimuli.

  10. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    PubMed

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm is proposed to recognize frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, using soft decisions can improve recognition accuracy. Therefore, drawing on the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other modulation formats. The complete recognition steps of both the hard-decision and soft-decision algorithms are given in detail. Finally, simulation results show that both algorithms can blindly recognize the parameters of frame synchronization words, and that the improved soft-decision algorithm noticeably enhances recognition accuracy.
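    An illustrative sketch (not the paper's exact algorithm) of blind frame-sync-word detection: fold the symbol stream at a candidate frame length and look for column positions whose value is nearly constant across frames. The hard-decision score uses demodulated bits only; the soft-decision score keeps each symbol's analog reliability, which is the source of the accuracy gain the abstract describes. All thresholds and parameters here are arbitrary assumptions.

    ```python
    import numpy as np

    def sync_columns_hard(bits, frame_len, threshold=0.95):
        rows = bits[: len(bits) // frame_len * frame_len].reshape(-1, frame_len)
        agree = np.maximum(rows.mean(axis=0), 1 - rows.mean(axis=0))  # majority agreement per column
        return np.where(agree >= threshold)[0]                        # candidate sync-word positions

    def sync_columns_soft(soft, frame_len, threshold=0.9):
        rows = soft[: len(soft) // frame_len * frame_len].reshape(-1, frame_len)
        score = np.abs(rows.mean(axis=0)) / (np.abs(rows).mean(axis=0) + 1e-9)
        return np.where(score >= threshold)[0]

    # Toy usage: a 7-symbol sync word at the start of every 32-symbol frame, noisy payload.
    rng = np.random.default_rng(1)
    sync = np.array([1, -1, 1, 1, -1, -1, 1], dtype=float)
    frames = [np.concatenate([sync, rng.choice([-1.0, 1.0], 25)]) for _ in range(200)]
    soft = np.concatenate(frames) + rng.normal(scale=0.4, size=200 * 32)
    print(sync_columns_soft(soft, 32))                     # expected: columns 0..6
    print(sync_columns_hard((soft > 0).astype(float), 32))
    ```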

  11. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  12. Practical automatic Arabic license plate recognition system

    NASA Astrophysics Data System (ADS)

    Mohammad, Khader; Agaian, Sos; Saleh, Hani

    2011-02-01

    Since the 1970s, the need for an automatic license plate recognition system has been increasing. A license plate recognition system is an automatic system that is able to recognize a license plate number extracted from image sensors. In particular, automatic license plate recognition systems are used in conjunction with various transportation systems in application areas such as law enforcement (e.g., speed limit enforcement) and commercial uses such as parking enforcement, automatic toll payment, private and public entrances, border control, and theft and vandalism control. Vehicle license plate recognition has been intensively studied in many countries. Due to the different types of license plates in use, the requirements of an automatic license plate recognition system differ for each country. Generally, an automatic license plate localization and recognition system is made up of three modules: license plate localization, character segmentation, and optical character recognition. This paper presents an Arabic license plate recognition system that is insensitive to character size, font, shape, and orientation and achieves an extremely high accuracy rate. The proposed system is based on a combination of enhancement, license plate localization, morphological processing, and feature vector extraction using the Haar transform. The system is fast because alphabet and numeral classification exploits the license plate organization. Experimental results for license plates of two different Arab countries show an average of 99% successful license plate localization and recognition on a total of more than 20 different images captured from a complex outdoor environment. The run time is shorter than that of conventional and many state-of-the-art methods.
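    A schematic sketch of the three-module structure the record describes (plate localization, character segmentation, character recognition). OpenCV is used only for illustration; the paper's own pipeline relies on enhancement, morphology, and Haar-transform feature vectors, and the classifier object below is a hypothetical placeholder for any trained model.

    ```python
    import cv2
    import numpy as np

    def localize_plate(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 100, 200)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours]
        boxes = [b for b in boxes if b[2] > 2 * b[3]]        # keep wide, plate-shaped regions
        if not boxes:
            return None
        x, y, w, h = max(boxes, key=lambda b: b[2] * b[3])   # largest candidate
        return gray[y:y + h, x:x + w]

    def segment_characters(plate):
        _, binary = cv2.threshold(plate, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        chars = sorted((cv2.boundingRect(c) for c in contours), key=lambda b: b[0])
        return [binary[y:y + h, x:x + w] for (x, y, w, h) in chars if h > 0.4 * plate.shape[0]]

    def recognize_character(char_img, classifier):
        # classifier: any trained model over fixed-size feature vectors (the paper uses
        # Haar-transform features; a normalized raw-pixel vector is the stand-in here).
        features = cv2.resize(char_img, (16, 16)).flatten().astype(np.float32) / 255.0
        return classifier.predict(features[None, :])[0]
    ```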

  13. The Impact of Left and Right Intracranial Tumors on Picture and Word Recognition Memory

    ERIC Educational Resources Information Center

    Goldstein, Bram; Armstrong, Carol L.; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V.

    2004-01-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH…

  14. Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

    PubMed Central

    Frisch, Stefan A.; Pisoni, David B.

    2012-01-01

    Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784

  15. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    PubMed

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing numbers of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated as the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.
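    A minimal sketch of the response-entropy measure described above: given the distribution of completions that a sentence context elicits in published norms, entropy grows with both the number of different responses and how evenly probability is spread across them. The example contexts and counts are hypothetical.

    ```python
    import math

    def response_entropy(response_counts):
        """Shannon entropy (bits) of the distribution of norm responses for one context."""
        total = sum(response_counts.values())
        probs = [c / total for c in response_counts.values()]
        return -sum(p * math.log2(p) for p in probs if p > 0)

    # A strongly constraining context draws mostly one completion (low entropy):
    low = response_entropy({"ship": 48, "boat": 2})                      # ~0.24 bits
    # A weakly constraining context spreads responses over many competitors:
    high = response_entropy({"ship": 10, "boat": 9, "car": 8, "sun": 7, "dog": 6})
    print(low, high)
    ```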

  16. Word Recognition and Critical Reading.

    ERIC Educational Resources Information Center

    Groff, Patrick

    1991-01-01

    This article discusses the distinctions between literal and critical reading and explains the role that word recognition ability plays in critical reading behavior. It concludes that correct word recognition provides the raw material on which higher order critical reading is based. (DB)

  17. The role of backward associative strength in false recognition of DRM lists with multiple critical words.

    PubMed

    Beato, María S; Arndt, Jason

    2017-08-01

    Memory is a reconstruction of the past and is prone to errors. One of the most widely-used paradigms to examine false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants studied words associatively related to a non-presented critical word. In a subsequent memory test critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to studies of DRM lists associated with multiple critical words.

  18. Do handwritten words magnify lexical effects in visual word recognition?

    PubMed

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  19. Word recognition using a lexicon constrained by first/last character decisions

    NASA Astrophysics Data System (ADS)

    Zhao, Sheila X.; Srihari, Sargur N.

    1995-03-01

    In lexicon based recognition of machine-printed word images, the size of the lexicon can be quite extensive. The recognition performance is closely related to the size of the lexicon. Recognition performance drops quickly when lexicon size increases. Here, we present an algorithm to improve the word recognition performance by reducing the size of the given lexicon. The algorithm utilizes the information provided by the first and last characters of a word to reduce the size of the given lexicon. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The possible candidates based on the results given by the classifier are selected, which give us the sub-lexicon. Then a word shape analysis algorithm is applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images which includes a wide range of print types and qualities.
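    A sketch of the lexicon-reduction idea described above: a character classifier returns ranked candidates for the first and last characters of the word image, and only lexicon entries consistent with those candidates are passed on to the more expensive whole-word shape matcher. All names and the example lexicon are illustrative.

    ```python
    def reduce_lexicon(lexicon, first_candidates, last_candidates):
        """Keep only words whose first/last characters match the classifier's candidates."""
        first = {c.lower() for c in first_candidates}
        last = {c.lower() for c in last_candidates}
        return [w for w in lexicon if w[0].lower() in first and w[-1].lower() in last]

    lexicon = ["Boston", "Buffalo", "Baltimore", "Albany", "Austin", "Portland"]
    # Suppose the classifier's top choices are {B, P} for the first character
    # and {o, n} for the last character of the segmented word image:
    sub_lexicon = reduce_lexicon(lexicon, {"B", "P"}, {"o", "n"})
    print(sub_lexicon)   # ['Boston', 'Buffalo'] -> ranked next by word-shape analysis
    ```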

  20. Word Spotting and Recognition with Embedded Attributes.

    PubMed

    Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest

    2014-12-01

    This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
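    A minimal numpy sketch of the retrieval step described above: word images and text strings are both mapped into a common low-dimensional subspace, and spotting and recognition become nearest-neighbour search there. In the paper the projections are learned (label embedding, attributes, and a common subspace regression); random matrices stand in for them here, and all dimensions are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    d_img, d_txt, d_common = 128, 52, 32
    W_img = rng.normal(size=(d_common, d_img))   # learned in the real system
    W_txt = rng.normal(size=(d_common, d_txt))   # learned in the real system

    def embed(x, W):
        z = W @ x
        return z / np.linalg.norm(z)             # unit length, so cosine = dot product

    def spot(query_string_feats, image_feats_list):
        """Rank dataset word images by similarity to an embedded query string."""
        q = embed(query_string_feats, W_txt)
        scores = [float(q @ embed(f, W_img)) for f in image_feats_list]
        return np.argsort(scores)[::-1]

    # Toy usage with random descriptors standing in for real image/string features.
    images = [rng.normal(size=d_img) for _ in range(5)]
    print(spot(rng.normal(size=d_txt), images))
    ```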

  1. Large-Corpus Phoneme and Word Recognition and the Generality of Lexical Context in CVC Word Perception

    ERIC Educational Resources Information Center

    Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.

    2014-01-01

    Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…

  2. A pilot study to assess oral health literacy by comparing a word recognition and comprehension tool.

    PubMed

    Khan, Khadija; Ruby, Brendan; Goldblatt, Ruth S; Schensul, Jean J; Reisine, Susan

    2014-11-18

    Oral health literacy is important to oral health outcomes. Very little has been established on comparing word recognition to comprehension in oral health literacy especially in older adults. Our goal was to compare methods to measure oral health literacy in older adults by using the Rapid Estimate of Literacy in Dentistry (REALD-30) tool including word recognition and comprehension and by assessing comprehension of a brochure about dry mouth. 75 males and 75 females were recruited from the University of Connecticut Dental practice. Participants were English speakers and at least 50 years of age. They were asked to read the REALD-30 words out loud (word recognition) and then define them (comprehension). Each correctly-pronounced and defined word was scored 1 for total REALD-30 word recognition and REALD-30 comprehension scores of 0-30. Participants then read the National Institute of Dental and Craniofacial Research brochure "Dry Mouth" and answered three questions defining dry mouth, causes and treatment. Participants also completed a survey on dental behavior. Participants scored higher on REALD-30 word recognition with a mean of 22.98 (SD = 5.1) compared to REALD-30 comprehension with a mean of 16.1 (SD = 4.3). The mean score on the brochure comprehension was 5.1 of a possible total of 7 (SD = 1.6). Pearson correlations demonstrated significant associations among the three measures. Multivariate regression showed that females and those with higher education had significantly higher scores on REALD-30 word-recognition, and dry mouth brochure questions. Being white was significantly related to higher REALD-30 recognition and comprehension scores but not to the scores on the brochure. This pilot study demonstrates the feasibility of using the REALD-30 and a brochure to assess literacy in a University setting among older adults. Participants had higher scores on the word recognition than on comprehension agreeing with other studies that recognition does not imply understanding.

  3. The cingulo-opercular network provides word-recognition benefit.

    PubMed

    Vaden, Kenneth I; Kuchinsky, Stefanie E; Cute, Stephanie L; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2013-11-27

    Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.

  4. Storage and retrieval properties of dual codes for pictures and words in recognition memory.

    PubMed

    Snodgrass, J G; McClure, P

    1975-09-01

    Storage and retrieval properties of pictures and words were studied within a recognition memory paradigm. Storage was manipulated by instructing subjects either to image or to verbalize to both picture and word stimuli during the study sequence. Retrieval was manipulated by re-presenting a proportion of the old picture and word items in their opposite form during the recognition test (i.e., some old pictures were tested with their corresponding words and vice versa). Recognition performance for pictures was identical under the two instructional conditions, whereas recognition performance for words was markedly superior under the imagery instruction condition. It was suggested that subjects may engage in dual coding of simple pictures naturally, regardless of instructions, whereas dual coding of words may occur only under imagery instructions. The form of the test item had no effect on recognition performance for either type of stimulus and under either instructional condition. However, change of form of the test item markedly reduced item-by-item correlations between the two instructional conditions. It is tentatively proposed that retrieval is required in recognition, but that the effect of a form change is simply to make the retrieval process less consistent, not less efficient.

  5. [Representation of letter position in visual word recognition process].

    PubMed

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) High false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about within-word relative position of a letter is used in word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  6. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  7. Phonological Priming and Cohort Effects in Toddlers

    ERIC Educational Resources Information Center

    Mani, Nivedita; Plunkett, Kim

    2011-01-01

    Adult word recognition is influenced by prior exposure to phonologically or semantically related words ("cup" primes "cat" or "plate") compared to unrelated words ("door"), suggesting that words are organised in the adult lexicon based on their phonological and semantic properties and that word recognition implicates not just the heard word, but…

  8. On the Automaticity of Emotion Processing in Words and Faces: Event-Related Brain Potentials Evidence from a Superficial Task

    ERIC Educational Resources Information Center

    Rellecke, Julian; Palazova, Marina; Sommer, Werner; Schacht, Annekathrin

    2011-01-01

    The degree to which emotional aspects of stimuli are processed automatically is controversial. Here, we assessed the automatic elicitation of emotion-related brain potentials (ERPs) to positive, negative, and neutral words and facial expressions in an easy and superficial face-word discrimination task, for which the emotional valence was…

  9. Modeling Polymorphemic Word Recognition: Exploring Differences among Children with Early-Emerging and Late- Emerging Word Reading Difficulty

    ERIC Educational Resources Information Center

    Kearns, Devin M.; Steacy, Laura M.; Compton, Donald L.; Gilbert, Jennifer K.; Goodwin, Amanda P.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.

    2016-01-01

    Comprehensive models of derived polymorphemic word recognition skill in developing readers, with an emphasis on children with reading difficulty (RD), have not been developed. The purpose of the present study was to model individual differences in polymorphemic word recognition ability at the item level among 5th-grade children (N = 173)…

  10. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2; 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
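    A rough sketch of the survival-analysis logic the abstract refers to (after Reingold & Sheridan, 2014): for two conditions (e.g., low- versus high-frequency words), compute the proportion of responses still "surviving" (RT greater than t) at each time point, and estimate the earliest point at which the two curves reliably diverge. The bootstrap criterion, bin grid, and toy data below are arbitrary illustrative choices, not the authors' exact procedure.

    ```python
    import numpy as np

    def survival_curve(rts, grid):
        rts = np.asarray(rts)
        return np.array([(rts > t).mean() for t in grid])

    def divergence_point(rts_slow, rts_fast, grid, n_boot=1000, alpha=0.05, rng=None):
        rng = rng or np.random.default_rng(0)
        diffs = np.empty((n_boot, len(grid)))
        for i in range(n_boot):
            a = rng.choice(rts_slow, size=len(rts_slow), replace=True)
            b = rng.choice(rts_fast, size=len(rts_fast), replace=True)
            diffs[i] = survival_curve(a, grid) - survival_curve(b, grid)
        lower = np.percentile(diffs, 100 * alpha / 2, axis=0)
        significant = lower > 0              # slow condition reliably above fast condition
        if not significant.any():
            return None
        return grid[np.argmax(significant)]  # earliest reliably diverging time point

    grid = np.arange(200, 1200, 10)                        # ms
    slow = np.random.default_rng(1).normal(650, 100, 400)  # e.g., low-frequency words
    fast = np.random.default_rng(2).normal(600, 100, 400)  # e.g., high-frequency words
    print(divergence_point(slow, fast, grid))              # earliest reliable divergence (ms)
    ```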

  11. Age-related Effects on Word Recognition: Reliance on Cognitive Control Systems with Structural Declines in Speech-responsive Cortex

    PubMed Central

    Walczak, Adam; Ahlstrom, Jayne; Denslow, Stewart; Horwitz, Amy; Dubno, Judy R.

    2008-01-01

    Speech recognition can be difficult and effortful for older adults, even for those with normal hearing. Declining frontal lobe cognitive control has been hypothesized to cause age-related speech recognition problems. This study examined age-related changes in frontal lobe function for 15 clinically normal hearing adults (21–75 years) when they performed a word recognition task that was made challenging by decreasing word intelligibility. Although there were no age-related changes in word recognition, there were age-related changes in the degree of activity within left middle frontal gyrus (MFG) and anterior cingulate (ACC) regions during word recognition. Older adults engaged left MFG and ACC regions when words were most intelligible compared to younger adults who engaged these regions when words were least intelligible. Declining gray matter volume within temporal lobe regions responsive to word intelligibility significantly predicted left MFG activity, even after controlling for total gray matter volume, suggesting that declining structural integrity of brain regions responsive to speech leads to the recruitment of frontal regions when words are easily understood. Electronic supplementary material The online version of this article (doi:10.1007/s10162-008-0113-3) contains supplementary material, which is available to authorized users. PMID:18274825

  12. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  13. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar

    PubMed Central

    Shin, Young Hoon; Seo, Jiwon

    2016-01-01

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker’s vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing. PMID:27801867

  14. Towards Contactless Silent Speech Recognition Based on Detection of Active and Visible Articulators Using IR-UWB Radar.

    PubMed

    Shin, Young Hoon; Seo, Jiwon

    2016-10-29

    People with hearing or speaking disabilities are deprived of the benefits of conventional speech recognition technology because it is based on acoustic signals. Recent research has focused on silent speech recognition systems that are based on the motions of a speaker's vocal tract and articulators. Because most silent speech recognition systems use contact sensors that are very inconvenient to users or optical systems that are susceptible to environmental interference, a contactless and robust solution is hence required. Toward this objective, this paper presents a series of signal processing algorithms for a contactless silent speech recognition system using an impulse radio ultra-wide band (IR-UWB) radar. The IR-UWB radar is used to remotely and wirelessly detect motions of the lips and jaw. In order to extract the necessary features of lip and jaw motions from the received radar signals, we propose a feature extraction algorithm. The proposed algorithm noticeably improved speech recognition performance compared to the existing algorithm during our word recognition test with five speakers. We also propose a speech activity detection algorithm to automatically select speech segments from continuous input signals. Thus, speech recognition processing is performed only when speech segments are detected. Our testbed consists of commercial off-the-shelf radar products, and the proposed algorithms are readily applicable without designing specialized radar hardware for silent speech processing.

  15. A comparison of conscious and automatic memory processes for picture and word stimuli: a process dissociation analysis.

    PubMed

    McBride, Dawn M; Anne Dosher, Barbara

    2002-09-01

    Four experiments were conducted to evaluate explanations of picture superiority effects previously found for several tasks. In a process dissociation procedure (Jacoby, 1991) with word stem completion, picture fragment completion, and category production tasks, conscious and automatic memory processes were compared for studied pictures and words with an independent retrieval model and a generate-source model. The predictions of a transfer appropriate processing account of picture superiority were tested and validated in "process pure" latent measures of conscious and unconscious, or automatic and source, memory processes. Results from both model fits verified that pictures had a conceptual (conscious/source) processing advantage over words for all tasks. The effects of perceptual (automatic/word generation) compatibility depended on task type, with pictorial tasks favoring pictures and linguistic tasks favoring words. Results show support for an explanation of the picture superiority effect that involves an interaction of encoding and retrieval processes.

  16. Longitudinal changes in speech recognition in older persons.

    PubMed

    Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H

    2008-01-01

    Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.

  17. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

    In this paper, the automatic lip reading problem is investigated and an innovative approach to solving it is proposed. This new visual speech recognition (VSR) approach depends on the signature of the word itself, obtained from a hybrid feature extraction method combining geometric, appearance, and image transform features. The proposed VSR approach is termed "visual words". It consists of two main parts: 1) feature extraction/selection and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth, the mutual information and quality measurement between the DWT of the current ROI and the DWT of the previous ROI, the ratio of vertical to horizontal features taken from the DWT of the ROI, the ratio of vertical edges to horizontal edges of the ROI, the appearance of the tongue, and the appearance of teeth. Each spoken word is represented by 8 signals, one per feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using KNN and DTW. The approach has been evaluated on a large database of different speakers and large experiment sets. The evaluation demonstrated the efficiency of visual words and showed that VSR is a speaker-dependent problem.
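    A sketch of the classification back end described above: each spoken word is a multi-channel time series of visual features, compared to stored templates with dynamic time warping (DTW) and labelled by a nearest-neighbour vote. The DTW below is a plain textbook implementation, not the authors' exact configuration, and the trajectories are random placeholders.

    ```python
    import numpy as np

    def dtw_distance(a, b):
        """a, b: (T, n_features) feature trajectories for two utterances."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    def knn_label(query, templates, k=3):
        """templates: list of (label, trajectory) pairs from the training speaker."""
        dists = sorted((dtw_distance(query, traj), label) for label, traj in templates)
        top = [label for _, label in dists[:k]]
        return max(set(top), key=top.count)

    # Toy usage with random trajectories standing in for the 8 visual-feature signals.
    rng = np.random.default_rng(0)
    templates = [(w, rng.normal(loc=i, size=(30, 8)))
                 for i, w in enumerate(["yes", "no"]) for _ in range(4)]
    print(knn_label(rng.normal(loc=1.0, size=(28, 8)), templates))   # likely "no"
    ```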

  18. Does the cost function matter in Bayes decision rule?

    PubMed

    Schlüter, Ralf; Nussbaum-Thom, Markus; Ney, Hermann

    2012-02-01

    In many tasks in pattern recognition, such as automatic speech recognition (ASR), optical character recognition (OCR), part-of-speech (POS) tagging, and other string recognition tasks, we are faced with a well-known inconsistency: The Bayes decision rule is usually used to minimize string (symbol sequence) error, whereas, in practice, we want to minimize symbol (word, character, tag, etc.) error. When comparing different recognition systems, we do indeed use symbol error rate as an evaluation measure. The topic of this work is to analyze the relation between string (i.e., 0-1) and symbol error (i.e., metric, integer valued) cost functions in the Bayes decision rule, for which fundamental analytic results are derived. Simple conditions are derived for which the Bayes decision rule with integer-valued metric cost function and with 0-1 cost gives the same decisions or leads to classes with limited cost. The corresponding conditions can be tested with complexity linear in the number of classes. The results obtained do not make any assumption w.r.t. the structure of the underlying distributions or the classification problem. Nevertheless, the general analytic results are analyzed via simulations of string recognition problems with Levenshtein (edit) distance cost function. The results support earlier findings that considerable improvements are to be expected when initial error rates are high.
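    A standard formulation of the contrast the abstract analyzes, written out for clarity; the notation is generic and not necessarily the paper's own.

    ```latex
    % Bayes decision rule with a general cost function L over hypotheses W given input X:
    \[
      \hat{W} \;=\; \operatorname*{argmin}_{W} \; \sum_{W'} \Pr(W' \mid X)\, L(W, W').
    \]
    % With the 0-1 (string-error) cost L(W, W') = 1 - \delta(W, W'), this reduces to the
    % usual maximum a posteriori string rule:
    \[
      \hat{W}_{0\text{-}1} \;=\; \operatorname*{argmax}_{W} \; \Pr(W \mid X),
    \]
    % whereas evaluation uses a symbol-level, integer-valued metric cost such as the
    % Levenshtein (edit) distance, L(W, W') = \mathrm{Lev}(W, W').
    ```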

  19. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  20. Experiments in automatic word class and word sense identification for information retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gauch, S.; Futrelle, R.P.

    Automatic identification of related words and automatic detection of word senses are two long-standing goals of researchers in natural language processing. Word class information and word sense identification may enhance the performance of information retrieval systems. Large online corpora and increased computational capabilities make new techniques based on corpus linguistics feasible. Corpus-based analysis is especially needed for corpora from specialized fields for which no electronic dictionaries or thesauri exist. The methods described here use a combination of mutual information and word context to establish word similarities. Then, unsupervised classification is done using clustering in the word space, identifying word classes without pretagging. We also describe an extension of the method to handle the difficult problems of disambiguation and of determining part-of-speech and semantic information for low-frequency words. The method is powerful enough to produce high-quality results on a small corpus of 200,000 words from abstracts in a field of molecular biology.
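    A sketch of the corpus-based procedure the abstract outlines: score word-word association with (pointwise) mutual information over co-occurrence windows, use each word's vector of association scores as its context representation, and cluster those vectors to obtain word classes without pretagging. The window size, positive-PMI variant, KMeans, and toy corpus are illustrative choices, not the authors' exact method.

    ```python
    import numpy as np
    from collections import Counter
    from sklearn.cluster import KMeans

    def pmi_matrix(tokens, vocab, window=3):
        idx = {w: i for i, w in enumerate(vocab)}
        counts = np.zeros((len(vocab), len(vocab)))
        unigram = Counter(t for t in tokens if t in idx)
        for i, t in enumerate(tokens):
            if t not in idx:
                continue
            for u in tokens[max(0, i - window): i + window + 1]:
                if u in idx and u != t:
                    counts[idx[t], idx[u]] += 1          # windowed co-occurrence counts
        p_xy = counts / (counts.sum() + 1e-9)
        p_x = np.array([unigram[w] for w in vocab], dtype=float)
        p_x /= p_x.sum()
        pmi = np.log((p_xy + 1e-12) / (np.outer(p_x, p_x) + 1e-12))
        return np.maximum(pmi, 0)                        # positive PMI, a common robust variant

    def word_classes(tokens, vocab, n_classes=4):
        vectors = pmi_matrix(tokens, vocab)              # each row is a word's context profile
        return KMeans(n_clusters=n_classes, n_init=10, random_state=0).fit_predict(vectors)

    # Toy usage on a tiny hypothetical corpus.
    tokens = ("the gene encodes a protein the protein binds dna "
              "the enzyme cleaves dna the gene regulates the enzyme").split()
    vocab = ["gene", "protein", "dna", "enzyme", "encodes", "binds", "cleaves", "regulates"]
    print(dict(zip(vocab, word_classes(tokens, vocab))))
    ```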

  1. L2 Word Recognition Research: A Critical Review.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1996-01-01

    Explores conceptual syntheses advancing second language (L2) word recognition research and uncovers agendas relating to cross-linguistic examinations of L2 processing in a cohort of undergraduate students in France. Describes connections between word recognition and reading, overviews the connectionist construct, and illustrates cross-linguistic…

  2. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  3. Event Recognition Based on Deep Learning in Chinese Texts

    PubMed Central

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231

  4. Event Recognition Based on Deep Learning in Chinese Texts.

    PubMed

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.
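    A heavily reduced sketch of the pipeline the CEERM abstract describes: each word is turned into a feature vector (the paper's six layers are part of speech, dependency grammar, length, location, distance to the core word, and trigger-word frequency), unsupervised layers learn deeper features, and a supervised network then labels trigger words. sklearn's BernoulliRBM and MLPClassifier stand in for the DBN and back-propagation network; the feature vectors and labels below are toy placeholders.

    ```python
    import numpy as np
    from sklearn.neural_network import BernoulliRBM, MLPClassifier
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.random((500, 60))          # hypothetical per-word feature vectors, scaled to [0, 1]
    y = rng.integers(0, 2, 500)        # 1 = trigger word, 0 = other (toy labels)

    model = Pipeline([
        ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20, random_state=0)),
        ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
        ("clf", MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)),
    ])
    model.fit(X, y)                    # unsupervised feature layers + supervised classifier
    print(model.predict(X[:5]))
    ```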

  5. Effects of Error Correction on Word Recognition and Reading Comprehension.

    ERIC Educational Resources Information Center

    Jenkins, Joseph R.; And Others

    1983-01-01

    Two procedures for correcting oral reading errors, word supply and word drill, were examined to determine their effects on measures of word recognition and comprehension with 17 learning disabled elementary school students. (Author/SW)

  6. Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation

    PubMed Central

    Kunert, Richard; Scheepers, Christoph

    2014-01-01

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. PMID:25346708

  7. The Word Shape Hypothesis Re-Examined: Evidence for an External Feature Advantage in Visual Word Recognition

    ERIC Educational Resources Information Center

    Beech, John R.; Mayall, Kate A.

    2005-01-01

    This study investigates the relative roles of internal and external letter features in word recognition. In Experiment 1 the efficacy of outer word fragments (words with all their horizontal internal features removed) was compared with inner word fragments (words with their outer features removed) as primes in a forward masking paradigm. These…

  8. The Development of Word Recognition in a Second Language.

    ERIC Educational Resources Information Center

    Muljani, D.; Koda, Keiko; Moates, Danny R.

    1998-01-01

    A study investigated differences in English word recognition in native speakers of Indonesian (an alphabetic language) and Chinese (a logographic language) learning English as a Second Language. Results largely confirmed the hypothesis that an alphabetic first language would predict better word recognition in speakers of an alphabetic language,…

  9. The Role of Antibody in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang Hwan; Lee, Yoonhyoung; Kim, Kyungil

    2010-01-01

    A subsyllabic phonological unit, the antibody, has received little attention as a potential fundamental processing unit in word recognition. The psychological reality of the antibody in Korean recognition was investigated by looking at the performance of subjects presented with nonwords and words in the lexical decision task. In Experiment 1, the…

  10. The Effects of Explicit Word Recognition Training on Japanese EFL Learners

    ERIC Educational Resources Information Center

    Burrows, Lance; Holsworth, Michael

    2016-01-01

    This study is a quantitative, quasi-experimental investigation focusing on the effects of word recognition training on word recognition fluency, reading speed, and reading comprehension for 151 Japanese university students at a lower-intermediate reading proficiency level. Four treatment groups were given training in orthographic, phonological,…

  11. Selective inattention to anxiety-linked stimuli.

    PubMed

    Blum, G S; Barbour, J S

    1979-06-01

    The term selective inattention as used here subsumes those phenomena whose primary function is the active blocking or attenuation of partially processed contents en route to conscious expression. Examples are anxiety-motivated forgetting or perceptual distortion and hypnotically induced negative hallucinations. Studies in the field of selective attention have typically been designed to explain what takes place in a task in which the subject is first instructed to attend to a particular stimulus and then to consciously execute that instruction as well as he can. The rejection of content in process is examined only secondarily as a consequence of the acceptance of relevant information. In the present experiments and theorizing, the emphasis instead is on inhibitory operations that take place automatically, without conscious intent, in response to a potential anxiety reaction. Experiment 1 explored the interaction of anxiety-linked inattention with strength of a target stimulus. Three female subjects were programmed under hypnosis to respond posthypnotically in the On condition with prescribed degrees of anxiety when certain Blacky pictures popped into mind later at the end of experimental trials; in the Off condition all pictures were to become neutral. With the three female subjects still under hypnosis, each of the loaded pictures was then paired with a four-letter word relevant to the individual's own version of what was happening in the picture. The waking recognition task, carried out with amnesia for the prior hypnotic programming, consisted of tachistoscopic exposure of loaded words and physically similar filler words at four durations within a baseline range of recognition accuracy from 50%--75% correct. The data yielded a curvilinear relationship in which the recognition of only the loaded words was significantly lower in the On condition at the 60%--70% range of recognition accuracy but not at shorter or longer stimulus durations. Experiment 2, for which the prior hypnotic programming of the same three subjects was similar to Experiment 1, used an anagram approach to comparable four-letter words, except that pleasure-loaded words were introduced as a control along with filler words. Four durations of tachistoscopic exposure of the anagrams were used with each individual, and the major dependent variable was response latency measured in milliseconds. An independent measure of perceptual discriminability of the scrambled stimulus letters was obtained to isolate perceptual from cognitive aspects of the task. The results indicated that both low perceivability and high solvability increase the likelihood of response delays specifically in the presence of anxiety-linked stimuli. Experiment 3 was a nonhypnotic replication of Experiment 2, using 12 male and 13 female subjects. The potential affective loading of key anxiety and pleasure words was accomplished by structured scenarios for the Blacky pictures in which subjects were asked to place themselves as vividly as possible...

  12. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    PubMed Central

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2014-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708

  13. Functional Anatomy of Recognition of Chinese Multi-Character Words: Convergent Evidence from Effects of Transposable Nonwords, Lexicality, and Word Frequency.

    PubMed

    Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia

    2016-01-01

    This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword, the transposable nonword, was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in emerged regions to further assess their functional roles in word processing. Results showed significant conjunctional effect and positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed.

  14. Functional Anatomy of Recognition of Chinese Multi-Character Words: Convergent Evidence from Effects of Transposable Nonwords, Lexicality, and Word Frequency

    PubMed Central

    Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia

    2016-01-01

    This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword—transposable nonword—was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in emerged regions to further assess their functional roles in word processing. Results showed significant conjunctional effect and positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed. PMID:26901644

  15. Recognition and reading aloud of kana and kanji word: an fMRI study.

    PubMed

    Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao

    2009-03-16

    It has been proposed that different brain regions are recruited for processing two Japanese writing systems, namely, kanji (morphograms) and kana (syllabograms). However, this difference may depend upon what type of word was used and also on what type of task was performed. Using fMRI, we investigated brain activation for processing kanji and kana words with similar high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side, and the subjects were required to press a button corresponding to the real word in the word recognition task and were required to read aloud the real word in the reading aloud task. Brain activations were similar between kanji and kana during the reading aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which were related mainly to visual word-form analysis and visuospatial attention. Concerning the difference in brain activity between the two tasks, differential activation was found only in the regions associated with task-specific sensorimotor processing for kana, whereas the visuospatial attention network also showed greater activation during the word recognition task than during the reading aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between the script characteristics and the task demands.

  16. Intact suppression of increased false recognition in schizophrenia.

    PubMed

    Weiss, Anthony P; Dodson, Chad S; Goff, Donald C; Schacter, Daniel L; Heckers, Stephan

    2002-09-01

    Recognition memory is impaired in patients with schizophrenia, as they rely largely on item familiarity, rather than conscious recollection, to make mnemonic decisions. False recognition of novel items (foils) is increased in schizophrenia and may relate to this deficit in conscious recollection. By studying pictures of the target word during encoding, healthy adults can suppress false recognition. This study examined the effect of pictorial encoding on subsequent recognition of repeated foils in patients with schizophrenia. The study included 40 patients with schizophrenia and 32 healthy comparison subjects. After incidental encoding of 60 words or pictures, subjects were tested for recognition of target items intermixed with 60 new foils. These new foils were subsequently repeated following either a two- or 24-word delay. Subjects were instructed to label these repeated foils as new and not to mistake them for old target words. Schizophrenic patients showed greater overall false recognition of repeated foils. The rate of false recognition of repeated foils was lower after picture encoding than after word encoding. Despite higher levels of false recognition of repeated new items, patients and comparison subjects demonstrated a similar degree of false recognition suppression after picture, as compared to word, encoding. Patients with schizophrenia displayed greater false recognition of repeated foils than comparison subjects, suggesting both a decrement of item- (or source-) specific recollection and a consequent reliance on familiarity in schizophrenia. Despite these deficits, presenting pictorial information at encoding allowed schizophrenic subjects to suppress false recognition to a similar degree as the comparison group, implying the intact use of a high-level cognitive strategy in this population.

  17. Psychometric Functions for Shortened Administrations of a Speech Recognition Approach Using Tri-Word Presentations and Phonemic Scoring

    ERIC Educational Resources Information Center

    Gelfand, Stanley A.; Gelfand, Jessica T.

    2012-01-01

    Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12…

  18. Acoustic-Phonetic Versus Lexical Processing in Nonnative Listeners Differing in Their Dominant Language.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-09-01

    Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
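    The j factor mentioned above is compact enough to show directly: it is the logarithmic quotient of the word score and the phoneme score, interpretable as the apparent number of independent perceptual units per word. A minimal sketch, with hypothetical proportion-correct scores rather than values from the study:

    ```python
    import math

    def j_factor(word_score: float, phoneme_score: float) -> float:
        """Boothroyd & Nittrouer's j factor: j = log(p_word) / log(p_phoneme).
        Values near the number of phonemes (3 for CVC items) suggest little
        lexical support; smaller values suggest effective use of lexical context."""
        if not (0 < word_score < 1 and 0 < phoneme_score < 1):
            raise ValueError("proportion-correct scores must lie strictly between 0 and 1")
        return math.log(word_score) / math.log(phoneme_score)

    # Hypothetical scores: 40% of words and 70% of phonemes correct.
    print(round(j_factor(0.40, 0.70), 2))  # ~2.57, i.e., some lexical benefit for CVC words
    ```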

  19. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    PubMed

    Haro, Juan; Ferré, Pilar

    2018-06-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous words processing and representation.

  20. The impact of OCR accuracy on automated cancer classification of pathology reports.

    PubMed

    Zuccon, Guido; Nguyen, Anthony N; Bergheim, Anton; Wickman, Sandra; Grayson, Narelle

    2012-01-01

    To evaluate the effects of Optical Character Recognition (OCR) on the automatic cancer classification of pathology reports. Scanned images of pathology reports were converted to electronic free-text using a commercial OCR system. A state-of-the-art cancer classification system, the Medical Text Extraction (MEDTEX) system, was used to automatically classify the OCR reports. Classifications produced by MEDTEX on the OCR versions of the reports were compared with the classification from a human-amended version of the OCR reports. The employed OCR system was found to recognise scanned pathology reports with up to 99.12% character accuracy and up to 98.95% word accuracy. Errors in the OCR processing were found to have minimal impact on the automatic classification of scanned pathology reports into notifiable groups. However, the impact of OCR errors is not negligible when considering the extraction of cancer notification items, such as primary site, histological type, etc. The automatic cancer classification system used in this work, MEDTEX, has proven to be robust to errors produced by the acquisition of free-text pathology reports from scanned images through OCR software. However, issues emerge when considering the extraction of cancer notification items.
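    The reported character and word accuracies are alignment-based measures. As an illustrative sketch, word-level OCR accuracy can be computed from a word-token edit distance against the human-amended reference; the paper does not spell out its exact formula, so this definition is an assumption:

    ```python
    def word_accuracy(reference: str, ocr_output: str) -> float:
        """1 - (word-level edit distance / number of reference words)."""
        ref, hyp = reference.split(), ocr_output.split()
        # Standard dynamic-programming edit distance over word tokens.
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        return 1.0 - d[len(ref)][len(hyp)] / max(len(ref), 1)

    # One OCR error ("carcinorna") in a six-word reference -> accuracy ~0.83.
    print(word_accuracy("invasive ductal carcinoma of the breast",
                        "invasive ductal carcinorna of the breast"))
    ```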

  1. Continuous multiword recognition performance of young and elderly listeners in ambient noise

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi

    2005-09-01

    Hearing threshold shift due to aging is known to be a dominant factor in degrading speech recognition performance in noisy conditions. On the other hand, the cognitive factors of aging that relate to speech recognition performance in various speech-to-noise conditions are not well established. In this study, two kinds of speech test were performed to examine how working memory load relates to speech recognition performance. One is a word recognition test with high-familiarity, four-syllable Japanese words (single-word test). In this test, each word was presented to listeners; the listeners were asked to write the word down on paper with enough time to answer. In the other test, five words were presented continuously, and listeners were asked to write the words down only after all five words had been presented (multiword test). Both tests were done at various speech-to-noise ratios under 50-dBA Hoth spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to the scores of both tests; (2) scores of the single-word test are well correlated with those of the multiword test; and (3) scores of the multiword test do not improve as the speech-to-noise ratio improves in the condition where scores of the single-word test reach their ceiling.

  2. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    PubMed

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of the Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from the database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to the word frequency and neighborhood density in the database, based on the theory of the Neighborhood Activation Model (NAM). Ninety-six NH children (ages ranging between 4.0 and 7.0 years) were divided into three different age groups of 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. The inter-list performance was found to be equivalent and inter-rater reliability was high with 92.5-95% consistency. Results of word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words. "Easy" words scored higher than "hard" words. The word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency appeared to make increasingly greater contributions to Chinese word recognition. The results of the present study indicated that performances of Chinese word recognition were influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of NAM in the Chinese language. The development of the Standard-Chinese version of the LNT and the establishment of a database for children aged 4-6 years can provide a reliable means for spoken-word recognition testing in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  3. Automatic activation of exercise and sedentary stereotypes.

    PubMed

    Berry, Tanya; Spence, John C

    2009-09-01

    We examined the automatic activation of "sedentary" and "exerciser" stereotypes using a social prime Stroop task. Results showed significantly slower response times between the exercise words and the exercise control words and between the sedentary words and the exercise control words when preceded by an attractive exerciser prime. Words preceded by a normal-weight exerciser prime showed significantly slower response times for sedentary words over sedentary control words and exercise words. An overweight sedentary prime resulted in significantly slower response times for sedentary words over exercise words and exercise control words. These results highlight the need for increased awareness of how active and sedentary lifestyles are portrayed in the media.

  4. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  5. Visual speech influences speech perception immediately but not automatically.

    PubMed

    Mitterer, Holger; Reinisch, Eva

    2017-02-01

    Two experiments examined the time course of the use of auditory and visual speech cues to spoken word recognition using an eye-tracking paradigm. Results of the first experiment showed that the use of visual speech cues from lipreading is reduced if concurrently presented pictures require a division of attentional resources. This reduction was evident even when listeners' eye gaze was on the speaker rather than the (static) pictures. Experiment 2 used a deictic hand gesture to foster attention to the speaker. At the same time, the visual processing load was reduced by keeping the visual display constant over a fixed number of successive trials. Under these conditions, the visual speech cues from lipreading were used. Moreover, the eye-tracking data indicated that visual information was used immediately and even earlier than auditory information. In combination, these data indicate that visual speech cues are not used automatically, but if they are used, they are used immediately.

  6. Efficient Learning for the Poor: New Insights into Literacy Acquisition for Children

    NASA Astrophysics Data System (ADS)

    Abadzi, Helen

    2008-11-01

    Reading depends on the speed of visual recognition and capacity of short-term memory. To understand a sentence, the mind must read it fast enough to capture it within the limits of the short-term memory. This means that children must attain a minimum speed of fairly accurate reading to understand a passage. Learning to read involves "tricking" the brain into perceiving groups of letters as coherent words. This is achieved most efficiently by pairing small units consistently with sounds rather than learning entire words. To link the letters with sounds, explicit and extensive practice is needed; the more complex the spelling of a language, the more practice is necessary. However, schools of low-income students often waste instructional time and lack reading resources, so students cannot get sufficient practice to automatize reading and may remain illiterate for years. Lack of reading fluency in the early grades creates inefficiencies that affect the entire educational system. Neurocognitive research on reading points to benchmarks and monitoring indicators. All students should attain reading speeds of 45-60 words per minute by the end of grade 2 and 120-150 words per minute for grades 6-8.

  7. Concurrent Correlates of Chinese Word Recognition in Deaf and Hard-of-Hearing Children

    ERIC Educational Resources Information Center

    Ching, Boby Ho-Hong; Nunes, Terezinha

    2015-01-01

    The aim of this study was to explore the relative contributions of phonological, semantic radical, and morphological awareness to Chinese word recognition in deaf and hard-of-hearing (DHH) children. Measures of word recognition, general intelligence, phonological, semantic radical, and morphological awareness were administered to 32 DHH and 35…

  8. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  9. Formal Models of Word Recognition. Final Report.

    ERIC Educational Resources Information Center

    Travers, Jeffrey R.

    Existing mathematical models of word recognition are reviewed and a new theory is proposed in this research. The new theory integrates earlier proposals within a single framework, sacrificing none of the predictive power of the earlier proposals, but offering a gain in theoretical economy. The theory holds that word recognition is accomplished by…

  10. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    ERIC Educational Resources Information Center

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  11. Specifying Theories of Developmental Dyslexia: A Diffusion Model Analysis of Word Recognition

    ERIC Educational Resources Information Center

    Zeguers, Maaike H. T.; Snellings, Patrick; Tijms, Jurgen; Weeda, Wouter D.; Tamboer, Peter; Bexkens, Anika; Huizenga, Hilde M.

    2011-01-01

    The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and auditory lexical decision data. The first study showed…

  12. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  13. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  14. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    PubMed Central

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2011-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimuli characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading-off in their utilization of lexical and nonlexical information. PMID:21728459

  15. DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1

    NASA Astrophysics Data System (ADS)

    Garofolo, J. S.; Lamel, L. F.; Fisher, W. M.; Fiscus, J. G.; Pallett, D. S.

    1993-02-01

    The Texas Instruments/Massachusetts Institute of Technology (TIMIT) corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems. TIMIT contains speech from 630 speakers representing 8 major dialect divisions of American English, each speaking 10 phonetically-rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic, and word transcriptions, as well as speech waveform data for each spoken sentence. The release of TIMIT contains several improvements over the Prototype CD-ROM released in December, 1988: (1) full 630-speaker corpus, (2) checked and corrected transcriptions, (3) word-alignment transcriptions, (4) NIST SPHERE-headered waveform files and header manipulation software, (5) phonemic dictionary, (6) new test and training subsets balanced for dialectal and phonetic coverage, and (7) more extensive documentation.
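    Because the record highlights TIMIT's time-aligned word transcriptions, a small parsing sketch may help. It assumes the standard plain-text layout of TIMIT alignment files (one "start_sample end_sample token" entry per line, 16 kHz sampling rate); the path in the usage comment is hypothetical:

    ```python
    from pathlib import Path

    def read_timit_word_alignment(wrd_path: str, sample_rate: int = 16000):
        """Parse a TIMIT word-alignment (.wrd) file into (start_sec, end_sec, word) tuples."""
        alignments = []
        for line in Path(wrd_path).read_text().splitlines():
            if not line.strip():
                continue
            start, end, word = line.split()
            alignments.append((int(start) / sample_rate, int(end) / sample_rate, word))
        return alignments

    # Hypothetical usage:
    # for start, end, word in read_timit_word_alignment("TRAIN/DR1/FCJF0/SA1.WRD"):
    #     print(f"{word}: {start:.2f}-{end:.2f} s")
    ```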

  16. Difficulties in Automatic Speech Recognition of Dysarthric Speakers and Implications for Speech-Based Applications Used by the Elderly: A Literature Review

    ERIC Educational Resources Information Center

    Young, Victoria; Mihailidis, Alex

    2010-01-01

    Despite their growing presence in home computer applications and various telephony services, commercial automatic speech recognition technologies are still not easily employed by everyone; especially individuals with speech disorders. In addition, relatively little research has been conducted on automatic speech recognition performance with older…

  17. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    ERIC Educational Resources Information Center

    Haro, Juan; Ferré, Pilar

    2018-01-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these…

  18. Selective attention and recognition: effects of congruency on episodic learning.

    PubMed

    Rosner, Tamara M; D'Angelo, Maria C; MacLellan, Ellen; Milliken, Bruce

    2015-05-01

    Recent research on cognitive control has focused on the learning consequences of high selective attention demands in selective attention tasks (e.g., Botvinick, Cognit Affect Behav Neurosci 7(4):356-366, 2007; Verguts and Notebaert, Psychol Rev 115(2):518-525, 2008). The current study extends these ideas by examining the influence of selective attention demands on remembering. In Experiment 1, participants read aloud the red word in a pair of red and green spatially interleaved words. Half of the items were congruent (the interleaved words had the same identity), and the other half were incongruent (the interleaved words had different identities). Following the naming phase, participants completed a surprise recognition memory test. In this test phase, recognition memory was better for incongruent than for congruent items. In Experiment 2, context was only partially reinstated at test, and again recognition memory was better for incongruent than for congruent items. In Experiment 3, all of the items contained two different words, but in one condition the words were presented close together and interleaved, while in the other condition the two words were spatially separated. Recognition memory was better for the interleaved than for the separated items. This result rules out an interpretation of the congruency effects on recognition in Experiments 1 and 2 that hinges on stronger relational encoding for items that have two different words. Together, the results support the view that selective attention demands for incongruent items lead to encoding that improves recognition.

  19. Comparison of crisp and fuzzy character networks in handwritten word recognition

    NASA Technical Reports Server (NTRS)

    Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien

    1992-01-01

    Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
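    The fuzzy desired outputs mentioned above can be illustrated with a Keller-style fuzzy k-nearest-neighbor rule, in which class memberships are inverse-distance-weighted votes of the k nearest training samples. The abstract does not give the exact weighting used, so the sketch below is an assumption for illustration:

    ```python
    import numpy as np

    def fuzzy_knn_memberships(x, train_X, train_y, n_classes, k=5, m=2.0):
        """Soft class-membership vector for sample x (sums to 1), usable as a
        fuzzy training target instead of a crisp one-hot vector."""
        dists = np.linalg.norm(train_X - x, axis=1)
        nearest = np.argsort(dists)[:k]
        # Inverse-distance weights with the usual fuzzifier exponent 2/(m-1).
        weights = 1.0 / np.maximum(dists[nearest], 1e-12) ** (2.0 / (m - 1.0))
        memberships = np.zeros(n_classes)
        for idx, w in zip(nearest, weights):
            memberships[train_y[idx]] += w
        return memberships / memberships.sum()

    # Hypothetical usage with 26 character classes, feature matrix X, labels y
    # (in practice the sample itself would be excluded from its own neighbor list):
    # targets = np.vstack([fuzzy_knn_memberships(x, X, y, 26) for x in X])
    ```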

  20. Support vector machine for automatic pain recognition

    NASA Astrophysics Data System (ADS)

    Monwar, Md Maruf; Rezaei, Siamak

    2009-02-01

    Facial expressions are a key index of emotion and the interpretation of such expressions of emotion is critical to everyday social functioning. In this paper, we present an efficient video analysis technique for recognition of a specific expression, pain, from human faces. We employ an automatic face detector which detects the face from the stored video frame using a skin color modeling technique. For pain recognition, location and shape features of the detected faces are computed. These features are then used as inputs to a support vector machine (SVM) for classification. We compare the results with neural-network-based and eigenimage-based automatic pain recognition systems. The experimental results indicate that using a support vector machine as the classifier can certainly improve the performance of an automatic pain recognition system.
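    A minimal sketch of the classification stage described above, using scikit-learn's SVC on placeholder shape/location features; the feature content, kernel, and parameters here are assumptions, not the authors' settings:

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder per-frame face features (e.g., landmark coordinates, region
    # geometry) with binary labels: 1 = pain, 0 = no pain.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 20))
    y = rng.integers(0, 2, size=200)

    # Standardize the features, then train the SVM classifier.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```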

  1. Single-Word Recognition Need Not Depend on Single-Word Features: Narrative Coherence Counteracts Effects of Single-Word Features That Lexical Decision Emphasizes

    ERIC Educational Resources Information Center

    Teng, Dan W.; Wallot, Sebastian; Kelty-Stephen, Damian G.

    2016-01-01

    Research on reading comprehension of connected text emphasizes reliance on single-word features that organize a stable, mental lexicon of words and that speed or slow the recognition of each new word. However, the time needed to recognize a word might not actually be as fixed as previous research indicates, and the stability of the mental lexicon…

  2. Acquisition of Malay word recognition skills: lessons from low-progress early readers.

    PubMed

    Lee, Lay Wah; Wheldall, Kevin

    2011-02-01

    Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both syllable awareness and phoneme blending were significant predictors of word recognition, suggesting that both syllable and phonemic grain-sizes are important in Malay word recognition. Item analysis revealed a hierarchical pattern of difficulty based on the syllable and the phonic structure of the words. Error analysis identified the sources of errors to be errors due to inefficient syllable segmentation, oversimplification of syllables, insufficient grapheme-phoneme knowledge and inefficient phonemic code assembly. Evidence also suggests that direct instruction in syllable segmentation, phonemic awareness and grapheme-phoneme correspondence is necessary for low-progress readers to acquire word recognition skills. Finally, a logical sequence to teach grapheme-phoneme decoding in Malay is suggested. Copyright © 2010 John Wiley & Sons, Ltd.

  3. Context effects and false memory for alcohol words in adolescents.

    PubMed

    Zack, Martin; Sharpley, Justin; Dent, Clyde W; Stacy, Alan W

    2009-03-01

    This study assessed incidental recognition of Alcohol and Neutral words in adolescents who encoded the words under distraction. Participants were 171 (87 male) 10th grade students, ages 14-16 (M=15.1) years. Testing was conducted by telephone: Participants listened to a list containing Alcohol and Neutral (Experimental--Group E, n=92) or only Neutral (Control--Group C, n=79) words, while counting backwards from 200 by two's. Recognition was tested immediately thereafter. Group C exhibited higher false recognition of Neutral than Alcohol items, whereas Group E displayed equivalent false rates for both word types. The reported number of alcohol TV ads seen in the past week predicted higher false recognition of Neutral words in Group C and of Alcohol words in Group E. False memory for Alcohol words in Group E was greater in males and high anxiety sensitive participants. These context-dependent biases may contribute to exaggerations in perceived drinking norms previously found to predict alcohol misuse in young drinkers.

  4. Image based book cover recognition and retrieval

    NASA Astrophysics Data System (ADS)

    Sukhadan, Kalyani; Vijayarajan, V.; Krishnamoorthi, A.; Bessie Amali, D. Geraldine

    2017-11-01

    In this work, we develop a graphical user interface (GUI) in MATLAB that lets users retrieve information about books in real time. A photo of the book cover is captured through the GUI, and the MSER algorithm automatically detects candidate features in the input image and then filters out non-text features based on morphological differences between text and non-text regions. We implement a text character alignment algorithm that improves the accuracy of the original text detection. We also examine the built-in MATLAB OCR algorithm and a commonly used open-source OCR engine to obtain better detection results, and apply a post-detection algorithm and natural language processing to perform word correction and suppress false detections. Finally, the detection result is matched against online sources. The algorithm achieves an accuracy of more than 86%.
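    A rough Python/OpenCV analogue of the MSER-plus-OCR pipeline described above (the original work used MATLAB); the region-filtering heuristic and the choice of Tesseract as the OCR engine are assumptions:

    ```python
    import cv2
    import pytesseract

    def detect_cover_text(image_path: str) -> str:
        """Detect candidate text regions with MSER, crop a bounding box around
        them, and pass the crop to an off-the-shelf OCR engine."""
        image = cv2.imread(image_path)
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        mser = cv2.MSER_create()
        _, boxes = mser.detectRegions(gray)
        # Crude morphological filter: keep boxes with text-like aspect ratios.
        boxes = [b for b in boxes if 0.1 < b[3] / max(b[2], 1) < 10]
        if not boxes:
            return ""
        x0 = min(b[0] for b in boxes); y0 = min(b[1] for b in boxes)
        x1 = max(b[0] + b[2] for b in boxes); y1 = max(b[1] + b[3] for b in boxes)
        return pytesseract.image_to_string(image[y0:y1, x0:x1])

    # Hypothetical usage:
    # print(detect_cover_text("book_cover.jpg"))
    ```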

  5. Effects of lexical characteristics and demographic factors on mandarin chinese open-set word recognition in children with cochlear implants.

    PubMed

    Liu, Haihong; Liu, Sha; Wang, Suju; Liu, Chang; Kong, Ying; Zhang, Ning; Li, Shujing; Yang, Yilin; Han, Demin; Zhang, Luo

    2013-01-01

    The purpose of this study was to examine the open-set word recognition performance of Mandarin Chinese-speaking children who had received a multichannel cochlear implant (CI) and examine the effects of lexical characteristics and demographic factors (i.e., age at implantation and duration of implant use) on Mandarin Chinese open-set word recognition in these children. Participants were 230 prelingually deafened children with CIs. Age at implantation ranged from 0.9 to 16.0 years, with a mean of 3.9 years. The Standard-Chinese version of the Monosyllabic Lexical Neighborhood test and the Multisyllabic Lexical Neighborhood test were used to evaluate the open-set word identification abilities of the children. A two-way analysis of variance was performed to delineate the lexical effects on the open-set word identification, with word difficulty and syllable length as the two main factors. The effects of age at implantation and duration of implant use on open-set word-recognition performance were examined using correlational/regressional models. First, the average percent-correct scores for the disyllabic "easy" list, disyllabic "hard" list, monosyllabic "easy" list, and monosyllabic "hard" list were 65.0%, 51.3%, 58.9%, and 46.2%, respectively. For both the easy and hard lists, the percentage of words correctly identified was higher for disyllabic words than for monosyllabic words. Second, the CI group scored 26.3, 31.3, and 18.8 percentage points lower than their hearing-age-matched normal-hearing peers for 4, 5, and 6 years of hearing age, respectively. The corresponding gaps between the CI group and the chronological-age-matched normal-hearing group were 47.6, 49.6, and 42.4 percentage points, respectively. The individual variations in performance were much greater in the CI group than in the normal-hearing group. Third, the children exhibited steady improvements in performance as the duration of implant use increased, especially 1 to 6 years postimplantation. Last, age at implantation had significant effects on postimplantation word-recognition performance. The benefit of early implantation was particularly evident in children 5 years old or younger. First, Mandarin Chinese-speaking pediatric CI users' open-set word recognition was influenced by the lexical characteristics of the stimuli. The score was higher for easy words than for hard words and was higher for disyllabic words than for monosyllabic words. Second, Mandarin-Chinese-speaking pediatric CI users exhibited steady progress in open-set word recognition as the duration of implant use increased. However, the present study also demonstrated that, even after 6 years of CI use, there was a significant deficit in open-set word-recognition performance in the CI children compared with their normal-hearing peers. Third, age at implantation had significant effects on open-set word-recognition performance. Early implanted children exhibited better performance than children implanted later.

  6. Relationships between Structural and Acoustic Properties of Maternal Talk and Children's Early Word Recognition

    ERIC Educational Resources Information Center

    Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco

    2017-01-01

    This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…

  7. The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands

    ERIC Educational Resources Information Center

    Diana, Rachel A.; Reder, Lynne M.

    2006-01-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…

  8. Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language

    ERIC Educational Resources Information Center

    Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter

    2011-01-01

    Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…

  9. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  10. Using Automatic Speech Recognition to Dictate Mathematical Expressions: The Development of the "TalkMaths" Application at Kingston University

    ERIC Educational Resources Information Center

    Wigmore, Angela; Hunter, Gordon; Pflugel, Eckhard; Denholm-Price, James; Binelli, Vincent

    2009-01-01

    Speech technology--especially automatic speech recognition--has now advanced to a level where it can be of great benefit both to able-bodied people and those with various disabilities. In this paper we describe an application "TalkMaths" which, using the output from a commonly-used conventional automatic speech recognition system,…

  11. Examination of the neighborhood activation theory in normal and hearing-impaired listeners.

    PubMed

    Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A

    2001-02-01

    Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency" or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors, word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program, 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). The results of the three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.
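    The neighborhood structure central to NAM can be made concrete with a small sketch that lists a target word's neighbors, i.e., lexicon entries one phoneme substitution, deletion, or addition away. The toy lexicon and the one-character-per-phoneme coding are illustrative only; the original study used a computerized phonemic dictionary:

    ```python
    def phonological_neighbors(target: str, lexicon: list[str]) -> list[str]:
        """Entries reachable from the target by one phoneme substitution,
        deletion, or addition (neighborhood density = length of this list)."""
        def one_edit_apart(a: str, b: str) -> bool:
            if abs(len(a) - len(b)) > 1:
                return False
            if len(a) == len(b):                    # substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            short, long_ = sorted((a, b), key=len)  # deletion / addition
            return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
        return [w for w in lexicon if w != target and one_edit_apart(target, w)]

    lexicon = ["kat", "bat", "kap", "at", "kats", "dog"]
    print(phonological_neighbors("kat", lexicon))  # ['bat', 'kap', 'at', 'kats']
    ```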

  12. Recognition of chemical entities: combining dictionary-based and grammar-based approaches.

    PubMed

    Akhondi, Saber A; Hettne, Kristina M; van der Horst, Eelke; van Mulligen, Erik M; Kors, Jan A

    2015-01-01

    The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming any of the individual systems that we considered. The system is able to provide structure information for most of the compounds that are found. Improved tokenization and better recognition of specific entity types is likely to further improve system performance.
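    The term confidence score is described only as a normalized ratio of term frequencies in chemical versus non-chemical journals; the sketch below shows one plausible reading, with add-one smoothing as an added assumption:

    ```python
    def term_confidence(term: str,
                        chem_freq: dict[str, int],
                        nonchem_freq: dict[str, int],
                        smoothing: float = 1.0) -> float:
        """Confidence in [0, 1] that a recognized term is a chemical, based on its
        smoothed relative frequency in chemical vs. non-chemical journals."""
        f_chem = chem_freq.get(term, 0) + smoothing
        f_other = nonchem_freq.get(term, 0) + smoothing
        return f_chem / (f_chem + f_other)

    # Illustrative counts, not taken from the paper:
    chem_freq = {"acetonitrile": 120, "patient": 3}
    nonchem_freq = {"acetonitrile": 2, "patient": 540}
    print(term_confidence("acetonitrile", chem_freq, nonchem_freq))  # close to 1
    print(term_confidence("patient", chem_freq, nonchem_freq))       # close to 0
    ```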

  13. Recognition of chemical entities: combining dictionary-based and grammar-based approaches

    PubMed Central

    2015-01-01

    Background The past decade has seen an upsurge in the number of publications in chemistry. The ever-swelling volume of available documents makes it increasingly hard to extract relevant new information from such unstructured texts. The BioCreative CHEMDNER challenge invites the development of systems for the automatic recognition of chemicals in text (CEM task) and for ranking the recognized compounds at the document level (CDI task). We investigated an ensemble approach where dictionary-based named entity recognition is used along with grammar-based recognizers to extract compounds from text. We assessed the performance of ten different commercial and publicly available lexical resources using an open source indexing system (Peregrine), in combination with three different chemical compound recognizers and a set of regular expressions to recognize chemical database identifiers. The effect of different stop-word lists, case-sensitivity matching, and use of chunking information was also investigated. We focused on lexical resources that provide chemical structure information. To rank the different compounds found in a text, we used a term confidence score based on the normalized ratio of the term frequencies in chemical and non-chemical journals. Results The use of stop-word lists greatly improved the performance of the dictionary-based recognition, but there was no additional benefit from using chunking information. A combination of ChEBI and HMDB as lexical resources, the LeadMine tool for grammar-based recognition, and the regular expressions, outperformed any of the individual systems. On the test set, the F-scores were 77.8% (recall 71.2%, precision 85.8%) for the CEM task and 77.6% (recall 71.7%, precision 84.6%) for the CDI task. Missed terms were mainly due to tokenization issues, poor recognition of formulas, and term conjunctions. Conclusions We developed an ensemble system that combines dictionary-based and grammar-based approaches for chemical named entity recognition, outperforming any of the individual systems that we considered. The system is able to provide structure information for most of the compounds that are found. Improved tokenization and better recognition of specific entity types is likely to further improve system performance. PMID:25810767

  14. Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?

    ERIC Educational Resources Information Center

    McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc

    2010-01-01

    To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…

  15. Lexico-Semantic Structure and the Word-Frequency Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Monaco, Joseph D.; Abbott, L. F.; Kahana, Michael J.

    2007-01-01

    The word-frequency effect (WFE) in recognition memory refers to the finding that more rare words are better recognized than more common words. We demonstrate that a familiarity-discrimination model operating on data from a semantic word-association space yields a robust WFE in data on both hit rates and false-alarm rates. Our modeling results…

  16. Stroop effects from newly learned color words: effects of memory consolidation and episodic context

    PubMed Central

    Geukes, Sebastian; Gaskell, M. Gareth; Zwitserlood, Pienie

    2015-01-01

    The Stroop task is an excellent tool to test whether reading a word automatically activates its associated meaning, and it has been widely used in mono- and bilingual contexts. Despite its ubiquity, the task has not yet been employed to test the automaticity of recently established word-concept links in novel-word-learning studies, under strict experimental control of learning and testing conditions. In three experiments, we thus paired novel words with native language (German) color words via lexical association and subsequently tested these words in a manual version of the Stroop task. Two crucial findings emerged: When novel word Stroop trials appeared intermixed among native-word trials, the novel-word Stroop effect was observed immediately after the learning phase. If no native color words were present in a Stroop block, the novel-word Stroop effect only emerged 24 h later. These results suggest that the automatic availability of a novel word's meaning depends on supportive context from the learning episode and/or on sufficient time for memory consolidation. We discuss how these results can be reconciled with the complementary learning systems account of word learning. PMID:25814973

  17. Oscillatory brain dynamics associated with the automatic processing of emotion in words.

    PubMed

    Wang, Lin; Bastiaansen, Marcel

    2014-10-01

    This study examines the automaticity of processing the emotional aspects of words, and characterizes the oscillatory brain dynamics that accompany this automatic processing. Participants read emotionally negative, neutral and positive nouns while performing a color detection task in which only perceptual-level analysis was required. Event-related potentials and time frequency representations were computed from the concurrently measured EEG. Negative words elicited a larger P2 and a larger late positivity than positive and neutral words, indicating deeper semantic/evaluative processing of negative words. In addition, sustained alpha power suppressions were found for the emotional compared to neutral words, in the time range from 500 to 1000ms post-stimulus. These results suggest that sustained attention was allocated to the emotional words, whereas the attention allocated to the neutral words was released after an initial analysis. This seems to hold even when the emotional content of the words is task-irrelevant. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Phonological Activation in Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.

    2007-01-01

    Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…

  19. Automatic priming of attentional control by relevant colors.

    PubMed

    Ansorge, Ulrich; Becker, Stefanie I

    2012-01-01

    We tested whether color word cues automatically primed attentional control settings during visual search, or whether color words were used in a strategic manner for the control of attention. In Experiment 1, we used color words as cues that were informative or uninformative with respect to the target color. Regardless of the cue's informativeness, distractors similar to the color cue captured more attention. In Experiment 2, the participants either indicated their expectation about the target color or recalled the last target color, which was uncorrelated with the present target color. We observed more attentional capture by distractors that were similar to the participants' predictions and recollections, but no difference between effects of the recollected and predicted colors. In Experiment 3, we used 100%-informative word cues that were congruent with the predicted target color (e.g., the word "red" informed that the target would be red) or incongruent with the predicted target color (e.g., the word "green" informed that the target would be red) and found that informative incongruent word cues primed attention capture by a word-similar distractor. Together, the results suggest that word cues (Exps. 1 and 3) and color representations (Exp. 2) primed attention capture in an automatic manner. This indicates that color cues automatically primed temporary adjustments in attention control settings.

  20. A novel probabilistic framework for event-based speech recognition

    NASA Astrophysics Data System (ADS)

    Juneja, Amit; Espy-Wilson, Carol

    2003-10-01

    One of the reasons for unsatisfactory performance of the state-of-the-art automatic speech recognition (ASR) systems is the inferior acoustic modeling of low-level acoustic-phonetic information in the speech signal. An acoustic-phonetic approach to ASR, on the other hand, explicitly targets linguistic information in the speech signal, but such a system for continuous speech recognition (CSR) is not known to exist. A probabilistic and statistical framework for CSR based on the idea of the representation of speech sounds by bundles of binary-valued articulatory phonetic features is proposed. Multiple probabilistic sequences of linguistically motivated landmarks are obtained using binary classifiers of manner phonetic features (syllabic, sonorant, and continuant) and the knowledge-based acoustic parameters (APs) that are acoustic correlates of those features. The landmarks are then used for the extraction of knowledge-based APs for source and place phonetic features and their binary classification. Probabilistic landmark sequences are constrained using manner class language models for isolated or connected word recognition. The proposed method could overcome the disadvantages encountered by the early acoustic-phonetic knowledge-based systems that led the ASR community to switch to systems highly dependent on statistical pattern analysis methods and probabilistic language or grammar models.
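
    The framework above hinges on frame-wise binary classification of manner features (syllabic, sonorant, continuant) followed by landmark extraction where the feature decisions change. The sketch below illustrates that idea under strong simplifications: the acoustic parameters and labels are synthetic, and generic logistic-regression classifiers stand in for the knowledge-based classifiers the record describes.

```python
# Sketch: frame-wise binary manner-feature classification and landmark extraction
# at feature transitions. The acoustic parameters (APs) and labels are synthetic
# stand-ins for the knowledge-based APs described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_frames, n_aps = 200, 6
aps = rng.normal(size=(n_frames, n_aps))             # per-frame acoustic parameters
labels = {                                           # synthetic binary feature tracks
    "syllabic":   (aps[:, 0] > 0).astype(int),
    "sonorant":   (aps[:, 1] > 0).astype(int),
    "continuant": (aps[:, 2] > 0).astype(int),
}

# One binary classifier per manner feature, as in the landmark-based approach.
posteriors = {}
for feat, y in labels.items():
    clf = LogisticRegression().fit(aps, y)
    posteriors[feat] = clf.predict_proba(aps)[:, 1]  # P(feature = +) per frame

def landmarks(prob, threshold=0.5):
    """Frame indices where the binary feature decision flips (candidate landmarks)."""
    decisions = prob > threshold
    return np.flatnonzero(np.diff(decisions.astype(int)) != 0) + 1

for feat, prob in posteriors.items():
    print(feat, "landmark frames:", landmarks(prob)[:5], "...")
```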

  1. Hazardous sign detection for safety applications in traffic monitoring

    NASA Astrophysics Data System (ADS)

    Benesova, Wanda; Kottman, Michal; Sidla, Oliver

    2012-01-01

    The transportation of hazardous goods on public street systems can pose severe safety threats in the case of accidents. One solution to this problem is the automatic detection and registration of vehicles that are marked with dangerous-goods signs. We present a prototype system which can detect a trained set of signs in high-resolution images under real-world conditions. This paper compares two detection methods: a bag-of-visual-words (BoW) procedure and our approach based on pairs of visual words with Hough voting. The results of an extended series of experiments are provided. The experiments show that the size of the visual vocabulary is crucial and can significantly affect the recognition success rate; different codebook sizes have been evaluated for this detection task. The best result for the BoW method was 67% of hazardous signs correctly recognized, whereas the proposed method, pairs of visual words with Hough voting, reached 94% of signs correctly detected. The experiments are designed to verify the usability of the two approaches in a real-world scenario.
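
    For readers unfamiliar with the bag-of-visual-words baseline compared above, the sketch below shows the standard pipeline: cluster local descriptors into a visual vocabulary, represent each image as a normalized histogram of visual words, and train a linear classifier on those histograms. The descriptors are random placeholders here, and the vocabulary size k is the parameter the record identifies as crucial.

```python
# Sketch: a minimal bag-of-visual-words (BoW) image classifier. Local descriptors
# are assumed to be precomputed by some keypoint detector; random placeholders are
# used here. The codebook size k is the "visual vocabulary size" tuned in the study.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

def fake_descriptors(n):
    return rng.normal(size=(n, 64))                  # stand-in for e.g. 64-D local descriptors

train_images = [fake_descriptors(rng.integers(50, 80)) for _ in range(20)]
train_labels = rng.integers(0, 2, size=20)           # 1 = hazardous sign present (illustrative)

k = 100                                              # visual vocabulary size
codebook = KMeans(n_clusters=k, n_init=5, random_state=0).fit(np.vstack(train_images))

def bow_histogram(descriptors):
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=k).astype(float)
    return hist / hist.sum()                         # normalized visual-word histogram

X = np.array([bow_histogram(d) for d in train_images])
clf = LinearSVC().fit(X, train_labels)
print("Prediction for a new image:", clf.predict([bow_histogram(fake_descriptors(60))]))
```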

  2. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels, vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  3. Nonlinear changes in brain activity during continuous word repetition: an event-related multiparametric functional MR imaging study.

    PubMed

    Hagenbeek, R E; Rombouts, S A R B; Veltman, D J; Van Strien, J W; Witter, M P; Scheltens, P; Barkhof, F

    2007-10-01

    Changes in brain activation as a function of continuous multiparametric word recognition have, to our knowledge, not been studied before using functional MR imaging (fMRI). Our aim was to identify linear changes in brain activation and, more interestingly, nonlinear changes in brain activation as a function of extended word repetition. Fifteen healthy young right-handed individuals participated in this study. An event-related extended continuous word-recognition task with 30 target words was used to study the parametric effect of word recognition on brain activation. Word-recognition-related brain activation was studied as a function of 9 word repetitions. fMRI data were analyzed with a general linear model with regressors for linearly changing signal intensity and nonlinearly changing signal intensity, according to group average reaction time (RT) and individual RTs. A network generally associated with episodic memory recognition showed either constant or linearly decreasing brain activation as a function of word repetition. Furthermore, both anterior and posterior cingulate cortices and the left middle frontal gyrus followed the nonlinear curve of the group RT, whereas the anterior cingulate cortex was also associated with individual RT. Linear alteration in brain activation as a function of word repetition explained most changes in blood oxygen level-dependent signal intensity. Using a hierarchically orthogonalized model, we found evidence for nonlinear activation associated with both group and individual RTs.
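
    The analysis above boils down to a general linear model whose regressors encode constant, linearly changing, and nonlinear (reaction-time-shaped) signal components across repetitions. The sketch below reproduces that logic on synthetic values with ordinary least squares; a real analysis would convolve the regressors with a hemodynamic response function and fit the model voxel-wise.

```python
# Sketch: a parametric GLM with constant, linear, and nonlinear (RT-shaped)
# regressors across word repetitions, fit by ordinary least squares on synthetic
# data. Illustrative of the analysis logic only, not the study's pipeline.
import numpy as np

n_repetitions = 9
rep = np.arange(1, n_repetitions + 1)

constant = np.ones(n_repetitions)
linear = (rep - rep.mean()) / rep.std()                    # linearly changing component
group_rt = np.array([820, 760, 720, 700, 690, 685, 683, 684, 686], dtype=float)
nonlinear = (group_rt - group_rt.mean()) / group_rt.std()  # RT-shaped nonlinear regressor
# Hierarchical orthogonalization: remove the part of the nonlinear regressor
# already explained by the linear term.
nonlinear -= (nonlinear @ linear) / (linear @ linear) * linear

X = np.column_stack([constant, linear, nonlinear])         # design matrix

rng = np.random.default_rng(2)
bold = 100 - 1.5 * linear + 2.0 * nonlinear + rng.normal(scale=0.5, size=n_repetitions)

beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
print("Estimated betas (constant, linear, nonlinear):", np.round(beta, 2))
```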

  4. Studies in automatic speech recognition and its application in aerospace

    NASA Astrophysics Data System (ADS)

    Taylor, Michael Robinson

    Human communication is characterized in terms of the spectral and temporal dimensions of speech waveforms. Electronic speech recognition strategies based on Dynamic Time Warping and Markov Model algorithms are described and typical digit recognition error rates are tabulated. The application of Direct Voice Input (DVI) as an interface between man and machine is explored within the context of civil and military aerospace programmes. Sources of physical and emotional stress affecting speech production within military high performance aircraft are identified. Experimental results are reported which quantify fundamental frequency and coarse temporal dimensions of male speech as a function of the vibration, linear acceleration and noise levels typical of aerospace environments; preliminary indications of acoustic phonetic variability reported by other researchers are summarized. Connected whole-word pattern recognition error rates are presented for digits spoken under controlled Gz sinusoidal whole-body vibration. Correlations are made between significant increases in recognition error rate and resonance of the abdomen-thorax and head subsystems of the body. The phenomenon of vibrato style speech produced under low frequency whole-body Gz vibration is also examined. Interactive DVI system architectures and avionic data bus integration concepts are outlined together with design procedures for the efficient development of pilot-vehicle command and control protocols.
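
    The record above mentions Dynamic Time Warping as one of the recognition strategies evaluated. The sketch below is a textbook DTW distance between two frame-wise feature sequences, the core of template-based isolated word recognition; the feature vectors are random placeholders.

```python
# Sketch: classic dynamic time warping (DTW) distance between two feature
# sequences, as used in template-based isolated word recognition. Features here
# stand in for per-frame spectral vectors.
import numpy as np

def dtw_distance(a, b):
    """Accumulated DTW cost between sequences a (n x d) and b (m x d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])   # local frame distance
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

rng = np.random.default_rng(3)
template = rng.normal(size=(40, 12))    # stored word template
utterance = rng.normal(size=(55, 12))   # incoming utterance
print("DTW distance:", round(dtw_distance(template, utterance), 2))
```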

  5. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  6. Automated Intelligibility Assessment of Pathological Speech Using Phonological Features

    NASA Astrophysics Data System (ADS)

    Middag, Catherine; Martens, Jean-Pierre; Van Nuffelen, Gwen; De Bodt, Marc

    2009-12-01

    It is commonly acknowledged that word or phoneme intelligibility is an important criterion in the assessment of the communication efficiency of a pathological speaker. Researchers have therefore put a great deal of effort into the design of perceptual intelligibility rating tests. These tests usually have the drawback that they employ unnatural speech material (e.g., nonsense words) and that they cannot fully exclude errors due to listener bias. Therefore, there is a growing interest in the application of objective automatic speech recognition technology to automate the intelligibility assessment. Current research is headed towards the design of automated methods which can be shown to produce ratings that correspond well with those emerging from a well-designed and well-performed perceptual test. In this paper, a novel methodology that is built on previous work (Middag et al., 2008) is presented. It utilizes phonological features, automatic speech alignment based on acoustic models that were trained on normal speech, context-dependent speaker feature extraction, and intelligibility prediction based on a small model that can be trained on pathological speech samples. The experimental evaluation of the new system reveals that the root mean squared error of the discrepancies between perceived and computed intelligibilities can be as low as 8 on a scale of 0 to 100.
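
    The final stage of the system described above is a small trainable model that maps speaker-level phonological-feature statistics to an intelligibility score and is evaluated by root-mean-squared error. The sketch below illustrates that stage with synthetic data and a ridge regression; it is not the authors' model.

```python
# Sketch: predicting perceptual intelligibility (0-100) from per-speaker
# phonological-feature statistics with a small regularized linear model, scored
# by root-mean-squared error. All data are synthetic.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
n_speakers, n_features = 120, 20
speaker_features = rng.normal(size=(n_speakers, n_features))   # e.g. phonological-feature stats
true_weights = rng.normal(size=n_features)
intelligibility = np.clip(70 + speaker_features @ true_weights * 3
                          + rng.normal(scale=5, size=n_speakers), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(speaker_features, intelligibility,
                                          test_size=0.3, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
rmse = np.sqrt(np.mean((model.predict(X_te) - y_te) ** 2))
print(f"RMSE on held-out speakers: {rmse:.1f} (scale 0-100)")
```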

  7. Tuning time-frequency methods for the detection of metered HF speech

    NASA Astrophysics Data System (ADS)

    Nelson, Douglas J.; Smith, Lawrence H.

    2002-12-01

    Speech is metered if the stresses occur at a nearly regular rate. Metered speech is common in poetry, and it can occur naturally in speech if the speaker is spelling a word or reciting words or numbers from a list. In radio communications, the CQ request, call sign and other codes are frequently metered. In tactical communications and air traffic control, location, heading and identification codes may be metered. Moreover, metering may be expected to survive even in HF communications, which are corrupted by noise, interference and mistuning. For this environment, speech recognition and conventional machine-based methods are not effective. We describe time-frequency methods that have been adapted successfully to the problem of mitigation of HF signal conditions and detection of metered speech. These methods are based on modeled time and frequency correlation properties of nearly harmonic functions. We derive these properties and demonstrate a performance gain over conventional correlation and spectral methods. Finally, HF single sideband (SSB) communications also raise the problems of carrier mistuning, interfering signals such as manual Morse, and fast automatic gain control (AGC). We demonstrate simple methods which may be used to blindly mitigate mistuning and narrowband interference, and effectively invert the fast automatic gain function.
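
    One generic way to make the notion of metering operational is to look for a strong periodicity in a short-time energy envelope. The sketch below does this with a simple autocorrelation on a synthetic amplitude-modulated signal; it illustrates the idea of detecting a regular stress rate, not the authors' specific time-frequency correlation method.

```python
# Sketch: estimating a nearly regular stress (metering) rate from the
# autocorrelation of a short-time energy envelope. Synthetic signal; generic
# illustration only.
import numpy as np

fs = 8000
t = np.arange(0, 4.0, 1 / fs)
stress_rate_hz = 2.5                                     # ~2.5 stresses per second
envelope = 0.5 * (1 + np.sin(2 * np.pi * stress_rate_hz * t))
sig = envelope * np.random.default_rng(5).normal(size=t.size)   # amplitude-modulated noise

frame, hop = 400, 200                                    # 50 ms frames, 25 ms hop
energy = np.array([np.sum(sig[i:i + frame] ** 2)
                   for i in range(0, sig.size - frame, hop)])
energy -= energy.mean()

ac = np.correlate(energy, energy, mode="full")[energy.size - 1:]
frame_rate = fs / hop
min_lag = int(frame_rate / 5.0)                          # ignore rates above 5 Hz
peak_lag = min_lag + np.argmax(ac[min_lag:])
print(f"Estimated stress rate: {frame_rate / peak_lag:.2f} Hz")
```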

  8. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    ERIC Educational Resources Information Center

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  9. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  10. Modelling the Effects of Semantic Ambiguity in Word Recognition

    ERIC Educational Resources Information Center

    Rodd, Jennifer M.; Gaskell, M. Gareth; Marslen-Wilson, William D.

    2004-01-01

    Most words in English are ambiguous between different interpretations; words can mean different things in different contexts. We investigate the implications of different types of semantic ambiguity for connectionist models of word recognition. We present a model in which there is competition to activate distributed semantic representations. The…

  11. Speech Perception, Word Recognition and the Structure of the Lexicon. Research on Speech Perception Progress Report No. 10.

    ERIC Educational Resources Information Center

    Pisoni, David B.; And Others

    The results of three projects concerned with auditory word recognition and the structure of the lexicon are reported in this paper. The first project described was designed to test experimentally several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. The second project description provides the…

  12. Bilingual Word Recognition in Deaf and Hearing Signers: Effects of Proficiency and Language Dominance on Cross-Language Activation

    ERIC Educational Resources Information Center

    Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin

    2014-01-01

    Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…

  13. The effect of scene context on episodic object recognition: parahippocampal cortex mediates memory encoding and retrieval success.

    PubMed

    Hayes, Scott M; Nadel, Lynn; Ryan, Lee

    2007-01-01

    Previous research has investigated intentional retrieval of contextual information and contextual influences on object identification and word recognition, yet few studies have investigated context effects in episodic memory for objects. To address this issue, unique objects embedded in a visually rich scene or on a white background were presented to participants. At test, objects were presented either in the original scene or on a white background. A series of behavioral studies with young adults demonstrated a context shift decrement (CSD): decreased recognition performance when context is changed between encoding and retrieval. The CSD was not attenuated by encoding or retrieval manipulations, suggesting that binding of object and context may be automatic. A final experiment explored the neural correlates of the CSD, using functional Magnetic Resonance Imaging. Parahippocampal cortex (PHC) activation (right greater than left) during incidental encoding was associated with subsequent memory of objects in the context shift condition. Greater activity in right PHC was also observed during successful recognition of objects previously presented in a scene. Finally, a subset of regions activated during scene encoding, such as bilateral PHC, was reactivated when the object was presented on a white background at retrieval. Although participants were not required to intentionally retrieve contextual information, the results suggest that PHC may reinstate visual context to mediate successful episodic memory retrieval. The CSD is attributed to automatic and obligatory binding of object and context. The results suggest that PHC is important not only for processing of scene information, but also plays a role in successful episodic memory encoding and retrieval. These findings are consistent with the view that spatial information is stored in the hippocampal complex, one of the central tenets of Multiple Trace Theory. (c) 2007 Wiley-Liss, Inc.

  14. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  15. The Effect of Word Associations on the Recognition of Flashed Words.

    ERIC Educational Resources Information Center

    Samuels, S. Jay

    The hypothesis that when associated pairs of words are presented, speed of recognition will be faster than when nonassociated word pairs are presented or when a target word is presented by itself was tested. Twenty university students, initially screened for vision, were assigned randomly to rows of a 5 x 5 repeated-measures Latin square design.…

  16. Influences of High and Low Variability on Infant Word Recognition

    ERIC Educational Resources Information Center

    Singh, Leher

    2008-01-01

    Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…

  17. Semi-automated contour recognition using DICOMautomaton

    NASA Astrophysics Data System (ADS)

    Clark, H.; Wu, J.; Moiseenko, V.; Lee, R.; Gill, B.; Duzenli, C.; Thomas, S.

    2014-03-01

    Purpose: A system has been developed which recognizes and classifies Digital Imaging and Communications in Medicine (DICOM) contour data with minimal human intervention. It allows researchers to overcome obstacles which tax analysis and mining systems, including inconsistent naming conventions and differences in data age or resolution. Methods: Lexicographic and geometric analysis is used for recognition. Well-known lexicographic methods implemented include Levenshtein-Damerau, bag-of-characters, Double Metaphone, Soundex, and (word and character)-N-grams. Geometrical implementations include 3D Fourier Descriptors, probability spheres, Boolean overlap, simple feature comparison (e.g., eccentricity, volume) and rule-based techniques. Both analyses implement custom, domain-specific modules (e.g., emphasis on differentiating left/right organ variants). Contour labels from 60 head and neck patients are used for cross-validation. Results: Mixed-lexicographical methods show an effective improvement in more than 10% of recognition attempts compared with a pure Levenshtein-Damerau approach when withholding 70% of the lexicon. Domain-specific and geometrical techniques further boost performance. Conclusions: DICOMautomaton allows users to recognize contours semi-automatically. As usage increases and the lexicon is filled with additional structures, performance improves, increasing the overall utility of the system.
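
    The mixed lexicographic matching described above can be illustrated by combining a normalized edit distance with character-bigram overlap when mapping free-form contour labels to a canonical lexicon. The sketch below is a simplified stand-in for the methods named in the record; the labels and lexicon are invented, and the real system adds domain-specific handling such as left/right organ variants.

```python
# Sketch: mixed lexicographic matching of free-form contour labels against a
# canonical lexicon, combining normalized edit distance with character-bigram
# (Dice) overlap. Labels, lexicon, and weights are illustrative.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def bigrams(s: str) -> set:
    return {s[i:i + 2] for i in range(len(s) - 1)}

def similarity(label: str, canonical: str) -> float:
    a, b = label.lower(), canonical.lower()
    edit_sim = 1 - levenshtein(a, b) / max(len(a), len(b))
    dice = 2 * len(bigrams(a) & bigrams(b)) / max(1, len(bigrams(a)) + len(bigrams(b)))
    return 0.5 * edit_sim + 0.5 * dice                   # equal-weight combination

lexicon = ["spinal_cord", "brainstem", "mandible", "larynx"]
for raw in ["SpinalCord", "Brain Stem", "Mandible bone"]:
    best = max(lexicon, key=lambda c: similarity(raw, c))
    print(f"{raw!r} -> {best!r} (score {similarity(raw, best):.2f})")
```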

  18. Automatic Speech Recognition Predicts Speech Intelligibility and Comprehension for Listeners With Simulated Age-Related Hearing Loss.

    PubMed

    Fontan, Lionel; Ferrané, Isabelle; Farinas, Jérôme; Pinquier, Julien; Tardieu, Julien; Magnen, Cynthia; Gaillard, Pascal; Aumont, Xavier; Füllgrabe, Christian

    2017-09-18

    The purpose of this article is to assess speech processing for listeners with simulated age-related hearing loss (ARHL) and to investigate whether the observed performance can be replicated using an automatic speech recognition (ASR) system. The long-term goal of this research is to develop a system that will assist audiologists/hearing-aid dispensers in the fine-tuning of hearing aids. Sixty young participants with normal hearing listened to speech materials mimicking the perceptual consequences of ARHL at different levels of severity. Two intelligibility tests (repetition of words and sentences) and one comprehension test (responding to oral commands by moving virtual objects) were administered. Several language models were developed and used by the ASR system in order to fit human performance. Strong significant positive correlations were observed between human and ASR scores, with coefficients up to .99. However, the spectral smearing used to simulate losses in frequency selectivity caused larger declines in ASR performance than in human performance. Both intelligibility and comprehension scores for listeners with simulated ARHL are highly correlated with the performance of an ASR-based system. In the future, it needs to be determined if the ASR system is similarly successful in predicting speech processing in noise and by older people with ARHL.

  19. Speaker information affects false recognition of unstudied lexical-semantic associates.

    PubMed

    Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E

    2018-05-01

    Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

  20. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased rather than decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  1. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.

    PubMed

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I

    2015-02-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.

  2. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words

    PubMed Central

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.

    2015-01-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021

  3. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  4. Aging and IQ effects on associative recognition and priming in item recognition

    PubMed Central

    McKoon, Gail; Ratcliff, Roger

    2012-01-01

    Two ways to examine memory for associative relationships between pairs of words were tested: an explicit method, associative recognition, and an implicit method, priming in item recognition. In an experiment with both kinds of tests, participants were asked to learn pairs of words. For the explicit test, participants were asked to decide whether two words of a test pair had been studied in the same or different pairs. For the implicit test, participants were asked to decide whether single words had or had not been among the studied pairs. Some test words were immediately preceded in the test list by the other word of the same pair and some by a word from a different pair. Diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008) analyses were carried out for both tasks for college-age participants, 60–74 year olds, and 75–90 year olds, and for higher- and lower-IQ participants, in order to compare the two measures of associative strength. Results showed parallel behavior of drift rates for associative recognition and priming across ages and across IQ, indicating that they are based, at least to some degree, on the same information in memory. PMID:24976676
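
    For readers unfamiliar with the diffusion model used in the analyses above, the sketch below simulates a simple drift-diffusion decision process and shows how drift rate maps onto accuracy and mean response time. Parameters are illustrative and are not the fitted values from the study.

```python
# Sketch: simulating a simple drift-diffusion decision process to show how drift
# rate maps onto accuracy and response time. Illustrative parameters only.
import numpy as np

def simulate_trials(drift, boundary=0.1, noise=0.1, dt=0.001, n_trials=500, t0=0.3):
    rng = np.random.default_rng(6)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:                  # accumulate noisy evidence
            x += drift * dt + noise * np.sqrt(dt) * rng.normal()
            t += dt
        rts.append(t + t0)                        # add non-decision time
        correct.append(x >= boundary)             # upper boundary = correct response
    return np.mean(correct), np.mean(rts)

for drift in (0.05, 0.15, 0.30):                  # lower drift ~ weaker memory evidence
    acc, mean_rt = simulate_trials(drift)
    print(f"drift={drift:.2f}: accuracy={acc:.2f}, mean RT={mean_rt:.2f} s")
```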

  5. Experimental research on showing automatic disappearance pen handwriting based on spectral imaging technology

    NASA Astrophysics Data System (ADS)

    Su, Yi; Xu, Lei; Liu, Ningning; Huang, Wei; Xu, Xiaojing

    2016-10-01

    Purpose: to find an efficient, non-destructive examination method for revealing words written with an automatic disappearance pen after they have faded. Method: an imaging spectrometer was used to reveal the latent faded words on the paper surface, exploiting the different reflection and absorption properties of the various substances in different spectral bands. Results: words that had disappeared, whether written with different disappearance pens on the same paper or with the same disappearance pen on different papers, could be clearly revealed using the spectral imaging examination method. Conclusion: spectral imaging technology can reveal words written with an automatic disappearance pen after they have disappeared.

  6. Origin of Emotion Effects on ERP Correlates of Emotional Word Processing: The Emotion Duality Approach.

    PubMed

    Imbir, Kamil Konrad; Jarymowicz, Maria Teresa; Spustek, Tomasz; Kuś, Rafał; Żygierewicz, Jarosław

    2015-01-01

    We distinguish two evaluative systems which evoke automatic and reflective emotions. Automatic emotions are direct reactions to stimuli whereas reflective emotions are always based on verbalized (and often abstract) criteria of evaluation. We conducted an electroencephalography (EEG) study in which 25 women were required to read and respond to emotional words which engaged either the automatic or reflective system. Stimulus words were emotional (positive or negative) and neutral. We found an effect of valence on an early response with dipolar fronto-occipital topography; positive words evoked a higher amplitude response than negative words. We also found that topographically specific differences in the amplitude of the late positive complex were related to the system involved in processing. Emotional stimuli engaging the automatic system were associated with significantly higher amplitudes in the left-parietal region; the response to neutral words was similar regardless of the system engaged. A different pattern of effects was observed in the central region: neutral stimuli engaging the reflective system evoked a higher-amplitude response, whereas there was no system effect for emotional stimuli. These differences could not be reduced to effects of differences between the arousing properties and concreteness of the words used as stimuli.

  7. Origin of Emotion Effects on ERP Correlates of Emotional Word Processing: The Emotion Duality Approach

    PubMed Central

    Imbir, Kamil Konrad; Jarymowicz, Maria Teresa; Spustek, Tomasz; Kuś, Rafał; Żygierewicz, Jarosław

    2015-01-01

    We distinguish two evaluative systems which evoke automatic and reflective emotions. Automatic emotions are direct reactions to stimuli whereas reflective emotions are always based on verbalized (and often abstract) criteria of evaluation. We conducted an electroencephalography (EEG) study in which 25 women were required to read and respond to emotional words which engaged either the automatic or reflective system. Stimulus words were emotional (positive or negative) and neutral. We found an effect of valence on an early response with dipolar fronto-occipital topography; positive words evoked a higher amplitude response than negative words. We also found that topographically specific differences in the amplitude of the late positive complex were related to the system involved in processing. Emotional stimuli engaging the automatic system were associated with significantly higher amplitudes in the left-parietal region; the response to neutral words was similar regardless of the system engaged. A different pattern of effects was observed in the central region: neutral stimuli engaging the reflective system evoked a higher-amplitude response, whereas there was no system effect for emotional stimuli. These differences could not be reduced to effects of differences between the arousing properties and concreteness of the words used as stimuli. PMID:25955719

  8. Evaluating Effects of Divided Hemispheric Processing on Word Recognition in Foveal and Extrafoveal Displays: The Evidence from Arabic

    PubMed Central

    Almabruk, Abubaker A. A.; Paterson, Kevin B.; McGowan, Victoria; Jordan, Timothy R.

    2011-01-01

    Background Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist. Methods and Findings Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task. Results Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations. Conclusions These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed. PMID:21559084

  9. The Potential of Automatic Word Comparison for Historical Linguistics.

    PubMed

    List, Johann-Mattis; Greenhill, Simon J; Gray, Russell D

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection-although not perfect-could become an important component of future research in historical linguistics.
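
    A minimal version of automatic cognate detection can be sketched as thresholding a normalized edit distance between translation equivalents and clustering the resulting similarity graph. The best-performing method in the study (Infomap) clusters a weighted network; simple connected components stand in for it below, and the word lists and threshold are illustrative.

```python
# Sketch: grouping candidate cognates by normalized edit distance plus
# connected-component clustering (a simplified stand-in for network clustering
# methods such as Infomap). Word lists are illustrative.
from itertools import combinations

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def normalized_distance(a, b):
    return edit_distance(a, b) / max(len(a), len(b))

# Words for the concept "water" in several languages (illustrative transcriptions).
words = {"English": "water", "German": "wasser", "Dutch": "water",
         "French": "eau", "Italian": "acqua", "Spanish": "agua"}

threshold = 0.4
parent = {lang: lang for lang in words}              # union-find for clustering

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for l1, l2 in combinations(words, 2):
    if normalized_distance(words[l1], words[l2]) <= threshold:
        parent[find(l1)] = find(l2)

clusters = {}
for lang in words:
    clusters.setdefault(find(lang), []).append(f"{words[lang]} ({lang})")
for members in clusters.values():
    print("cognate set:", members)
# Note: French "eau" is in fact cognate with "acqua"/"agua" but is missed here,
# illustrating why surface edit distance alone is "not perfect" for divergent forms.
```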

  10. The Potential of Automatic Word Comparison for Historical Linguistics

    PubMed Central

    Greenhill, Simon J.; Gray, Russell D.

    2017-01-01

    The amount of data from languages spoken all over the world is rapidly increasing. Traditional manual methods in historical linguistics need to face the challenges brought by this influx of data. Automatic approaches to word comparison could provide invaluable help to pre-analyze data which can be later enhanced by experts. In this way, computational approaches can take care of the repetitive and schematic tasks leaving experts to concentrate on answering interesting questions. Here we test the potential of automatic methods to detect etymologically related words (cognates) in cross-linguistic data. Using a newly compiled database of expert cognate judgments across five different language families, we compare how well different automatic approaches distinguish related from unrelated words. Our results show that automatic methods can identify cognates with a very high degree of accuracy, reaching 89% for the best-performing method Infomap. We identify the specific strengths and weaknesses of these different methods and point to major challenges for future approaches. Current automatic approaches for cognate detection—although not perfect—could become an important component of future research in historical linguistics. PMID:28129337

  11. Automatic Text Analysis Based on Transition Phenomena of Word Occurrences

    ERIC Educational Resources Information Center

    Pao, Miranda Lee

    1978-01-01

    Describes a method of selecting index terms directly from a word frequency list, an idea originally suggested by Goffman. Results of the analysis of word frequencies of two articles seem to indicate that the automated selection of index terms from a frequency list holds some promise for automatic indexing. (Author/MBR)
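
    The idea credited to Goffman in the record above is that good index terms cluster around a transition point of the word-frequency distribution. The sketch below uses one common formulation of that transition point, T satisfying I1 = T(T+1)/2 where I1 is the number of words occurring exactly once; the toy frequency table and the width of the transition region are illustrative assumptions.

```python
# Sketch: selecting candidate index terms around Goffman's transition point of the
# word-frequency distribution. The frequency table and the transition-region
# half-width are invented for illustration.
import math
from collections import Counter

freq = Counter({"the": 120, "of": 90, "and": 75,          # high-frequency function words
                "word": 30, "recognition": 25,
                "lexical": 7, "phoneme": 6, "automatic": 5, "stroop": 4,
                "priming": 2, "masked": 2})
freq.update({f"hapax{i}": 1 for i in range(21)})           # 21 words occurring exactly once

i1 = sum(1 for c in freq.values() if c == 1)               # number of hapax legomena
transition = (-1 + math.sqrt(1 + 8 * i1)) / 2              # T such that I1 = T(T+1)/2

window = 2                                                 # transition-region half-width
candidates = [w for w, c in freq.items() if abs(c - transition) <= window]
print(f"I1 = {i1}, transition point = {transition:.1f}")
print("candidate index terms:", candidates)
```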

  12. The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition

    ERIC Educational Resources Information Center

    Robert, Christelle; Mathey, Stephanie; Zagar, Daniel

    2007-01-01

    The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g., nasse/basse-lasse-tasse-masse) or were unequally…

  13. Morpho-Semantic Processing in Word Recognition: Evidence from Balanced and Biased Ambiguous Morphemes

    ERIC Educational Resources Information Center

    Tsang, Yiu-Kei; Chen, Hsuan-Chih

    2013-01-01

    The role of morphemic meaning in Chinese word recognition was examined with the masked and unmasked priming paradigms. Target words contained ambiguous morphemes biased toward the dominant or the subordinate meanings. Prime words either contained the same ambiguous morphemes in the subordinate interpretations or were unrelated to the targets. In…

  14. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  15. Evidence for Early Morphological Decomposition in Visual Word Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2010-01-01

    We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…

  16. Morphological Structures in Visual Word Recognition: The Case of Arabic

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)

    2004-01-01

    This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…

  17. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  18. Lexical leverage: Category knowledge boosts real-time novel word recognition in two-year-olds

    PubMed Central

    Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.

    2016-01-01

    Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. PMID:26452444

  19. How does Interhemispheric Communication in Visual Word Recognition Work? Deciding between Early and Late Integration Accounts of the Split Fovea Theory

    ERIC Educational Resources Information Center

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.

    2009-01-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…

  20. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    PubMed

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  1. Semantic contribution to verbal short-term memory: are pleasant words easier to remember than neutral words in serial recall and serial recognition?

    PubMed

    Monnier, Catherine; Syssau, Arielle

    2008-01-01

    In the four experiments reported here, we examined the role of word pleasantness on immediate serial recall and immediate serial recognition. In Experiment 1, we compared verbal serial recall of pleasant and neutral words, using a limited set of items. In Experiment 2, we replicated Experiment 1 with an open set of words (i.e., new items were used on every trial). In Experiments 3 and 4, we assessed immediate serial recognition of pleasant and neutral words, using item sets from Experiments 1 and 2. Pleasantness was found to have a facilitation effect on both immediate serial recall and immediate serial recognition. This study supplies some new supporting arguments in favor of a semantic contribution to verbal short-term memory performance. The pleasantness effect observed in immediate serial recognition showed that, contrary to a number of earlier findings, performance on this task can also turn out to be dependent on semantic factors. The results are discussed in relation to nonlinguistic and psycholinguistic models of short-term memory.

  2. Iconic gestures prime related concepts: an ERP study.

    PubMed

    Wu, Ying Croon; Coulson, Seana

    2007-02-01

    To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.

  3. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  4. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  5. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners

    PubMed Central

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers are able to deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they can process a reading task without time constraints. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, there is a lack of research testing the model in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether reading conditions that vary in time constraints affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. The correlation and regression analyses showed that the association between word recognition, working memory, and reading comprehension was much stronger under the time-constrained than under the non-time-constrained reading condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and working memory limitations in non-time-constrained reading. The participants were tested on the same computerized English word recognition test and Operation Span test. They were required to think aloud while reading and to complete the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comment. The correlation analyses showed that word recognition and working memory were significantly related to the frequency of language-oriented strategies, re-reading, and pausing, but not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, supported the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs. PMID:28522984

  6. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners.

    PubMed

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and limited working memory as long as they can read without time constraints. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, research testing the model in foreign language (FL) reading is lacking. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraints on reading affect the relationship between word recognition, working memory, and reading comprehension. Students completed a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. Correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained condition than under the non-time-constrained condition. Study two examined whether FL readers could use metacognitive reading strategies to compensate for inefficient word recognition and limited working memory in non-time-constrained reading. The participants completed the same computerized English word recognition and Operation Span tests, thought aloud while reading, and answered the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comments. The correlation analyses showed that word recognition and working memory were significantly related only to the frequency of language-oriented strategies, re-reading, and pausing, and not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs.

  7. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    PubMed

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were presented with 200 NU-6 words in quiet, in random order. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating the vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, the word-initial consonant contrasts /p-h/ and /b-f/, and the word-final contrast /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. The current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  8. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts.

    PubMed

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2016-06-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.

  9. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  10. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  11. The locus of word frequency effects in skilled spelling-to-dictation.

    PubMed

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  12. The role of semantically related distractors during encoding and retrieval of words in long-term memory.

    PubMed

    Meade, Melissa E; Fernandes, Myra A

    2016-07-01

    We examined the influence of divided attention (DA) on recognition of words when the concurrent task was semantically related or unrelated to the to-be-recognised target words. Participants were asked to either study or retrieve a target list of semantically related words while simultaneously making semantic decisions (i.e., size judgements) to another set of related or unrelated words heard concurrently. We manipulated semantic relatedness of distractor to target words, and whether DA occurred during the encoding or retrieval phase of memory. Recognition accuracy was significantly diminished relative to full attention, following DA conditions at encoding, regardless of relatedness of distractors to study words. However, response times (RTs) were slower with related compared to unrelated distractors. Similarly, under DA at retrieval, recognition RTs were slower when distractors were semantically related than unrelated to target words. Unlike the effect from DA at encoding, recognition accuracy was worse under DA at retrieval when the distractors were related compared to unrelated to the target words. Results suggest that availability of general attentional resources is critical for successful encoding, whereas successful retrieval is particularly reliant on access to a semantic code, making it sensitive to related distractors under DA conditions.

  13. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    PubMed

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to a reliance on relatively small samples. Additionally, it has been proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies than typically developing children to do so accurately. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase of basic emotion recognition in which they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy than typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a level similar to that of typically developing children.

  14. Testing Measurement Invariance across Groups of Children with and without Attention-Deficit/ Hyperactivity Disorder: Applications for Word Recognition and Spelling Tasks

    PubMed Central

    Lúcio, Patrícia S.; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M.; Bressan, Rodrigo A.; Gadelha, Ary; Rohde, Luis A.; Cogo-Moreira, Hugo

    2017-01-01

    Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants (n = 1,935; 47% female; 11% ADHD) were children aged 6–15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed. PMID:29118733

  15. Testing Measurement Invariance across Groups of Children with and without Attention-Deficit/ Hyperactivity Disorder: Applications for Word Recognition and Spelling Tasks.

    PubMed

    Lúcio, Patrícia S; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M; Bressan, Rodrigo A; Gadelha, Ary; Rohde, Luis A; Cogo-Moreira, Hugo

    2017-01-01

    Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants ( n = 1,935; 47% female; 11% ADHD) were children aged 6-15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed.

  16. False recognition production indexes in Spanish for 60 DRM lists with three critical words.

    PubMed

    Beato, Maria Soledad; Díez, Emiliano

    2011-06-01

    A normative study was conducted using the Deese/Roediger-McDermott paradigm (DRM) to obtain false recognition for 60 six-word lists in Spanish, designed with a completely new methodology. For the first time, lists included words (e.g., bridal, newlyweds, bond, commitment, couple, to marry) simultaneously associated with three critical words (e.g., love, wedding, marriage). Backward associative strength between lists and critical words was taken into account when creating the lists. The results showed that all lists produced false recognition. Moreover, some lists had a high false recognition rate (e.g., 65%; jail, inmate, prison: bars, prisoner, cell, offender, penitentiary, imprisonment). This is an aspect of special interest for those DRM experiments that, for example, record brain electrical activity. This type of list will enable researchers to raise the signal-to-noise ratio in false recognition event-related potential studies as they increase the number of critical trials per list, and it will be especially useful for the design of future research.
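
    As a rough illustration of the list-construction constraint mentioned above, backward associative strength (BAS) from each list word to a critical word can be checked against free-association norms. The norms, words, and values in this sketch are hypothetical English stand-ins, not the Spanish norms used in the study:

      # Hypothetical free-association norms: norms[cue][target] = proportion of
      # respondents who produced `target` as an associate of `cue`.
      norms = {
          "bridal":     {"wedding": 0.45, "love": 0.10},
          "newlyweds":  {"marriage": 0.30, "wedding": 0.25},
          "commitment": {"love": 0.20, "marriage": 0.15},
      }

      def mean_bas(list_words, critical_word, norms):
          """Mean backward associative strength from the list words to the critical word."""
          strengths = [norms.get(w, {}).get(critical_word, 0.0) for w in list_words]
          return sum(strengths) / len(strengths)

      list_words = ["bridal", "newlyweds", "commitment"]
      for critical in ["love", "wedding", "marriage"]:
          print(critical, round(mean_bas(list_words, critical, norms), 3))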

  17. Word recognition in Alzheimer's disease: Effects of semantic degeneration.

    PubMed

    Cuetos, Fernando; Arce, Noemí; Martínez, Carmen; Ellis, Andrew W

    2017-03-01

    Impairments of word recognition in Alzheimer's disease (AD) have been less widely investigated than impairments affecting word retrieval and production. In particular, we know little about what makes individual words easier or harder for patients with AD to recognize. We used a lexical selection task in which participants were shown sets of four items, each set consisting of one word and three non-words. The task was simply to point to the word on each trial. Forty patients with mild-to-moderate AD were significantly impaired on this task relative to matched controls who made very few errors. The number of patients with AD able to recognize each word correctly was predicted by the frequency, age of acquisition, and imageability of the words, but not by their length or number of orthographic neighbours. Patient Mini-Mental State Examination and phonological fluency scores also predicted the number of words recognized. We propose that progressive degradation of central semantic representations in AD differentially affects the ability to recognize low-imageability, low-frequency, late-acquired words, with the same factors affecting word recognition as affecting word retrieval. © 2015 The British Psychological Society.

  18. Contextual diversity facilitates learning new words in the classroom.

    PubMed

    Rosa, Eva; Tapia, José Luis; Perea, Manuel

    2017-01-01

    In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings.
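
    A minimal sketch of the contextual-diversity measure defined above: count the number of distinct contexts (e.g., films, lessons, or documents) in which a word occurs, as opposed to its raw token frequency. The toy corpus and words are invented for illustration:

      from collections import Counter

      # Toy corpus: each inner list is one "context" (e.g., one film subtitle file or lesson topic).
      contexts = [
          ["the", "dog", "ran", "home"],
          ["a", "dog", "barked", "at", "the", "dog"],
          ["the", "cat", "slept"],
      ]

      frequency = Counter(word for ctx in contexts for word in ctx)        # token count
      diversity = Counter(word for ctx in contexts for word in set(ctx))   # number of distinct contexts

      print(frequency["dog"], diversity["dog"])   # 3 occurrences, 2 contexts
      print(frequency["cat"], diversity["cat"])   # 1 occurrence, 1 context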

  19. Contextual diversity facilitates learning new words in the classroom

    PubMed Central

    Tapia, José Luis; Perea, Manuel

    2017-01-01

    In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings. PMID:28586354

  20. Medical Named Entity Recognition for Indonesian Language Using Word Representations

    NASA Astrophysics Data System (ADS)

    Rahman, Arief

    2018-03-01

    Nowadays, Named Entity Recognition (NER) system is used in medical texts to obtain important medical information, like diseases, symptoms, and drugs. While most NER systems are applied to formal medical texts, informal ones like those from social media (also called semi-formal texts) are starting to get recognition as a gold mine for medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based word representation and distributed representation.
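
    The record above contrasts cluster-based and distributed word representations as features for a medical NER model. The sketch below shows, in hypothetical form, how the two feature types could be built for a token classifier; the cluster ids, embedding vectors, and example Indonesian tokens are invented:

      import numpy as np

      # Hypothetical resources: a hard cluster id per word (e.g., from Brown clustering)
      # and a dense distributed vector per word (e.g., from a word-embedding model).
      cluster_id = {"demam": 3, "panas": 3, "paracetamol": 7}
      embedding = {"demam": np.array([0.2, -0.1, 0.7]),
                   "panas": np.array([0.3, -0.2, 0.6]),
                   "paracetamol": np.array([-0.5, 0.9, 0.1])}
      N_CLUSTERS, DIM = 10, 3

      def cluster_features(word):
          """One-hot vector over cluster ids (unknown words get the zero vector)."""
          vec = np.zeros(N_CLUSTERS)
          if word in cluster_id:
              vec[cluster_id[word]] = 1.0
          return vec

      def distributed_features(word):
          """Dense embedding vector (unknown words get the zero vector)."""
          return embedding.get(word, np.zeros(DIM))

      # Either feature vector can be fed to the same downstream sequence classifier.
      print(cluster_features("demam"), distributed_features("demam"))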

  1. The impact of left and right intracranial tumors on picture and word recognition memory.

    PubMed

    Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V

    2004-02-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.

  2. Word-Level and Sentence-Level Automaticity in English as a Foreign Language (EFL) Learners: a Comparative Study

    ERIC Educational Resources Information Center

    Ma, Dongmei; Yu, Xiaoru; Zhang, Haomin

    2017-01-01

    The present study aimed to investigate second language (L2) word-level and sentence-level automatic processing among English as a foreign language students through a comparative analysis of students with different proficiency levels. As a multidimensional and dynamic construct, automaticity is conceptualized as processing speed, stability, and…

  3. The Contributions of Vocabulary and Letter Writing Automaticity to Word Reading and Spelling for Kindergartners

    ERIC Educational Resources Information Center

    Kim, Young-Suk; Al Otaiba, Stephanie; Puranik, Cynthia; Folsom, Jessica Sidler; Gruelich, Luana

    2014-01-01

    In the present study we examined the relation between alphabet knowledge fluency (letter names and sounds) and letter writing automaticity, and unique relations of letter writing automaticity and semantic knowledge (i.e., vocabulary) to word reading and spelling over and above code-related skills such as phonological awareness and alphabet…

  4. Resolving Quasi-Synonym Relationships in Automatic Thesaurus Construction Using Fuzzy Rough Sets and an Inverse Term Frequency Similarity Function

    ERIC Educational Resources Information Center

    Davault, Julius M., III.

    2009-01-01

    One of the problems associated with automatic thesaurus construction is with determining the semantic relationship between word pairs. Quasi-synonyms provide a type of equivalence relationship: words are similar only for purposes of information retrieval. Determining such relationships in a thesaurus is hard to achieve automatically. The term…

  5. Sight Word Recognition among Young Children At-Risk: Picture-Supported vs. Word-Only

    ERIC Educational Resources Information Center

    Meadan, Hedda; Stoner, Julia B.; Parette, Howard P.

    2008-01-01

    A quasi-experimental design was used to investigate the impact of Picture Communication Symbols (PCS) on sight word recognition by young children identified as "at risk" for academic and social-behavior difficulties. Ten pre-primer and 10 primer Dolch words were presented to 23 students in the intervention group and 8 students in the…

  6. Word Recognition Error Analysis: Comparing Isolated Word List and Oral Passage Reading

    ERIC Educational Resources Information Center

    Flynn, Lindsay J.; Hosp, John L.; Hosp, Michelle K.; Robbins, Kelly P.

    2011-01-01

    The purpose of this study was to determine the relation between word recognition errors made at a letter-sound pattern level on a word list and on a curriculum-based measurement oral reading fluency measure (CBM-ORF) for typical and struggling elementary readers. The participants were second, third, and fourth grade typical and struggling readers…

  7. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  8. Word Recognition is Affected by the Meaning of Orthographic Neighbours: Evidence from Semantic Decision Tasks

    ERIC Educational Resources Information Center

    Boot, Inge; Pecher, Diane

    2008-01-01

    Many models of word recognition predict that neighbours of target words will be activated during word processing. Cascaded models can make the additional prediction that semantic features of those neighbours get activated before the target has been uniquely identified. In two semantic decision tasks neighbours that were congruent (i.e., from the…

  9. Semantic Ambiguity Effects in L2 Word Recognition

    ERIC Educational Resources Information Center

    Ishida, Tomomi

    2018-01-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a…

  10. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  11. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  12. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  13. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  14. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences, and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax, and working memory were used to predict individual differences in speech recognition in noise. Participants were 96 children with normal hearing between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  15. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    PubMed

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC results from the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills, such as processing speed, could be integrated into this model, and they have consistently indicated that processing speed influences and is an important predictor of the model's main components, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a dyslexic group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences in accuracy for oral and reading comprehension, phonological awareness, naming, or vocabulary scores. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. Results corroborate the literature regarding word recognition and processing speed deficits in dyslexia. However, readers with dyslexia can achieve normal scores on RC tests. The data support the importance of distinguishing the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  16. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children.

    PubMed

    Lewis, Dawna; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.
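
    A minimal sketch of the gating manipulation described above, assuming the sentence-final word is available as a waveform array; the gate duration, sampling rate, and dummy audio are arbitrary illustrative values, not the study's stimulus parameters:

      import numpy as np

      SAMPLE_RATE = 44100   # Hz; illustrative
      GATE_MS = 50          # each successive gate adds 50 ms of the final word

      def make_gated_stimuli(sentence_frame, final_word, sample_rate=SAMPLE_RATE, gate_ms=GATE_MS):
          """Return a list of stimuli: the sentence frame plus progressively longer
          onsets of the final word (the last item contains the whole word)."""
          gate_len = int(sample_rate * gate_ms / 1000)
          stimuli = []
          for end in range(gate_len, len(final_word) + gate_len, gate_len):
              stimuli.append(np.concatenate([sentence_frame, final_word[:end]]))
          return stimuli

      # Dummy audio standing in for recorded speech.
      frame = np.zeros(SAMPLE_RATE)                    # 1 s of "sentence frame"
      word = np.random.randn(int(0.4 * SAMPLE_RATE))   # 400 ms "final word"
      gated = make_gated_stimuli(frame, word)
      print(len(gated), [len(s) for s in gated[:3]])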

  17. Evidence for the Activation of Sensorimotor Information during Visual Word Recognition: The Body-Object Interaction Effect

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…

  18. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    PubMed

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable of reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.
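
    For orientation, a rough approximation of random spatial-frequency sampling in the spirit of the SF bubbles method mentioned above (this is an illustrative sketch, not the published procedure): the word image is filtered in the Fourier domain by a random, smooth weighting over radial spatial frequency.

      import numpy as np

      def random_sf_filter(image, n_bubbles=5, sigma=2.0, rng=np.random.default_rng()):
          """Filter an image with a random, smooth sampling of its radial spatial frequencies."""
          h, w = image.shape
          fy = np.fft.fftfreq(h)[:, None]
          fx = np.fft.fftfreq(w)[None, :]
          radius = np.sqrt(fx**2 + fy**2) * max(h, w)        # radial SF in cycles/image (approx.)
          centers = rng.uniform(0, radius.max(), n_bubbles)  # random SF "bubbles"
          weights = sum(np.exp(-((radius - c) ** 2) / (2 * sigma**2)) for c in centers)
          weights = np.clip(weights, 0, 1)
          return np.real(np.fft.ifft2(np.fft.fft2(image) * weights))

      word_image = np.random.rand(64, 256)   # stand-in for a rendered word image
      sampled = random_sf_filter(word_image)
      print(sampled.shape)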

  19. No one way ticket from orthography to semantics in recognition memory: N400 and P200 effects of associations.

    PubMed

    Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J

    2016-05-15

    Computational models of word recognition have already used associative spreading from orthographic to semantic levels to successfully account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.

  20. Automatic indexing of compound words based on mutual information for Korean text retrieval

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan Koo Kim; Yoo Kun Cho

    In this paper, we present an automatic indexing technique for compound words suited to an agglutinative language, specifically Korean. First, we present the construction conditions for composing compound words as indexing terms. We also present decomposition rules applicable to consecutive nouns to extract the full content of a text. Finally, we propose mutual information, an information-theoretic measure of term usefulness, to calculate the degree of word association within compound words. By applying this method, our system raised the precision for compound words from 72% to 87%.
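
    A minimal sketch of the mutual-information measure of word association for a candidate compound, under the usual pointwise mutual information definition; the corpus counts and Korean noun pair below are hypothetical:

      import math

      # Hypothetical corpus counts.
      total_tokens = 100_000
      count = {"정보": 1200, "검색": 800}       # individual noun frequencies
      pair_count = {("정보", "검색"): 300}      # frequency of the two nouns occurring as a compound

      def pmi(w1, w2):
          """Pointwise mutual information: log2( P(w1, w2) / (P(w1) * P(w2)) )."""
          p1 = count[w1] / total_tokens
          p2 = count[w2] / total_tokens
          p12 = pair_count[(w1, w2)] / total_tokens
          return math.log2(p12 / (p1 * p2))

      print(round(pmi("정보", "검색"), 2))   # higher values suggest a stronger compound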

  1. Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model.

    PubMed

    Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A

    2003-07-01

    Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
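
    A simplified sketch of the frequency-weighted choice rule summarized above, in the spirit of the Neighborhood Activation Model; the perception probabilities, word frequencies, and neighborhood are invented for illustration:

      # p_percept[w] : probability that the input is perceived as word w,
      #                derived from phoneme confusion matrices (hypothetical values).
      # freq[w]      : frequency of occurrence of word w (hypothetical counts).
      p_percept = {"cat": 0.60, "cap": 0.20, "bat": 0.15, "cut": 0.05}
      freq      = {"cat": 900,  "cap": 120,  "bat": 300,  "cut": 700}

      def nam_probability(target, candidates):
          """Probability of choosing `target` among itself and its similar-sounding
          neighbors, weighting perceptual evidence by word frequency."""
          weight = lambda w: p_percept[w] * freq[w]
          return weight(target) / sum(weight(w) for w in candidates)

      print(round(nam_probability("cat", ["cat", "cap", "bat", "cut"]), 3))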

  2. Orthographic neighborhood effects in recognition and recall tasks in a transparent orthography.

    PubMed

    Justi, Francis R R; Jaeger, Antonio

    2017-04-01

    The number of orthographic neighbors of a word influences its probability of being retrieved in recognition and free recall memory tests. Even though this phenomenon is well demonstrated for English words, it has yet to be demonstrated for languages with more predictable grapheme-phoneme mappings than English. To address this issue, 4 experiments were conducted to investigate effects of number of orthographic neighbors (N) and effects of frequency of occurrence of orthographic neighbors (NF) on memory retrieval of Brazilian Portuguese words. One hundred twenty-four Brazilian Portuguese speakers performed first a lexical-decision task (LDT) on words that were factorially manipulated according to N and NF, and intermixed with either nonpronounceable nonwords without orthographic neighbors (Experiments 1A and 2A), or with pronounceable nonwords with a large number of orthographic neighbors (Experiments 1B and 2B). The words were later used as probes on either recognition (Experiments 1A and 1B) or recall tests (Experiments 2A and 2B). Words with 1 orthographic neighbor were consistently better remembered than words with several orthographic neighbors in all recognition and recall tests. Notably, whereas in Experiment 1A higher false alarm rates were yielded for words with several rather than 1 orthographic neighbor, in Experiment 1B higher false alarm rates were yielded for words with 1 rather than several orthographic neighbors. Effects of NF, on the other hand, were not consistent among memory tasks. The effects of N on the recognition and recall tests conducted here are interpreted in light of dual process models of recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
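
    For reference, a minimal sketch of how the number of orthographic neighbors (substitution neighbors, i.e. Coltheart's N) can be counted against a lexicon; the tiny lexicon here is hypothetical:

      def is_substitution_neighbor(a, b):
          """True if the two words have equal length and differ in exactly one letter."""
          return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

      def orthographic_n(word, lexicon):
          """Number of orthographic (substitution) neighbors of `word` in `lexicon`."""
          return sum(is_substitution_neighbor(word, other) for other in lexicon if other != word)

      lexicon = {"casa", "cama", "capa", "caso", "gato", "pato"}
      print(orthographic_n("casa", lexicon))   # cama, capa, caso -> 3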

  3. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  4. The influence of lexical characteristics and talker accent on the recognition of English words by speakers of Japanese.

    PubMed

    Yoneyama, Kiyoko; Munson, Benjamin

    2017-02-01

    This study examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon. Specifically, the experiment examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND on either measure was found. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.

  5. Bridging automatic speech recognition and psycholinguistics: Extending Shortlist to an end-to-end model of human speech recognition (L)

    NASA Astrophysics Data System (ADS)

    Scharenborg, Odette; ten Bosch, Louis; Boves, Lou; Norris, Dennis

    2003-12-01

    This letter evaluates potential benefits of combining human speech recognition (HSR) and automatic speech recognition by building a joint model of an automatic phone recognizer (APR) and a computational model of HSR, viz., Shortlist [Norris, Cognition 52, 189-234 (1994)]. Experiments based on "real-life" speech highlight critical limitations posed by some of the simplifying assumptions made in models of human speech recognition. These limitations could be overcome by avoiding hard phone decisions at the output side of the APR, and by using a match between the input and the internal lexicon that flexibly copes with deviations from canonical phonemic representations.

  6. Application of image recognition-based automatic hyphae detection in fungal keratitis.

    PubMed

    Wu, Xuelian; Tao, Yuan; Qiu, Qingchen; Wu, Xinyi

    2018-03-01

    The purpose of this study was to compare the accuracy of two methods for diagnosing fungal keratitis: automatic hyphae detection based on image recognition and corneal smear examination. We evaluated the sensitivity and specificity of automatic hyphae detection for the diagnosis of fungal keratitis, analysed the consistency between clinical symptoms and hyphal density, and quantified hyphal density with the same method. The study included 56 cases of fungal keratitis (one eye each) and 23 cases of bacterial keratitis. All cases underwent routine slit-lamp biomicroscopy, corneal smear examination, microorganism culture, and assessment of in vivo confocal microscopy images before starting medical treatment. The in vivo confocal microscopy images were then analysed with automatic hyphae detection based on image recognition to estimate its sensitivity and specificity and to compare it with corneal smear examination. A hyphal density index was used to grade the severity of infection, and its correlation and consistency with patients' clinical symptoms were evaluated. The accuracy of this technology was superior to that of corneal smear examination (p < 0.05). The sensitivity of automatic hyphae detection based on image recognition was 89.29%, and the specificity was 95.65%. The area under the ROC curve was 0.946. The correlation coefficient between the severity grading produced by automatic hyphae detection and the clinical grading was 0.87. Automatic hyphae detection based on image recognition thus identified fungal keratitis with high sensitivity and specificity, outperforming corneal smear examination. Compared with conventional manual interpretation of confocal microscopy corneal images, the technology is accurate, stable, and does not rely on human expertise, making it most useful to clinicians who are unfamiliar with fungal keratitis. It can also quantify and grade hyphal density and, being noninvasive, can provide a timely, accurate, objective, and quantitative evaluation criterion for fungal keratitis.
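
    A minimal sketch, with invented labels, decisions, and confidence scores, of the sensitivity, specificity, and ROC-area summary statistics reported above, using scikit-learn's roc_auc_score for the area under the curve:

      from sklearn.metrics import roc_auc_score

      # 1 = fungal keratitis, 0 = bacterial keratitis (hypothetical ground truth and detector output).
      y_true = [1, 1, 1, 1, 0, 0, 0, 1, 0, 1]
      y_pred = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]                          # binary decision of the detector
      y_score = [0.9, 0.8, 0.4, 0.7, 0.2, 0.1, 0.6, 0.95, 0.3, 0.85]   # detector confidence

      tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
      fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
      tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
      fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      auc = roc_auc_score(y_true, y_score)   # area under the ROC curve
      print(round(sensitivity, 2), round(specificity, 2), round(auc, 2))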

  7. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  8. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    PubMed

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  9. Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.

    PubMed

    Jones, Nicola; Riby, Leigh M; Smith, Michael A

    2016-07-01

    Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairment for both words and more complex stimuli such as faces is a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.

  10. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

  11. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  12. Document image cleanup and binarization

    NASA Astrophysics Data System (ADS)

    Wu, Victor; Manmatha, Raghaven

    1998-04-01

    Image binarization is a difficult task for documents with text over textured or shaded backgrounds, poor contrast, and/or considerable noise. Current optical character recognition (OCR) and document analysis technology does not handle such documents well. We have developed a simple yet effective algorithm for document image clean-up and binarization. The algorithm consists of two basic steps. In the first step, the input image is smoothed using a low-pass filter. The smoothing operation enhances the text relative to any background texture, because background texture normally has higher frequency than text does; it also removes speckle noise. In the second step, the intensity histogram of the smoothed image is computed and a threshold is automatically selected as follows. For black text, the first peak of the histogram corresponds to text. Thresholding the image at the value of the valley between the first and second peaks of the histogram binarizes the image well. In order to reliably identify the valley, the histogram is smoothed by a low-pass filter before the threshold is computed. The algorithm has been applied to some 50 images from a wide variety of sources: digitized video frames, photos, newspapers, advertisements in magazines or sales flyers, personal checks, etc. There are 21820 characters and 4406 words in these images. 91 percent of the characters and 86 percent of the words were successfully cleaned up and binarized. A commercial OCR engine was applied to the binarized text when it consisted of fonts that were OCR-recognizable. The recognition rate was 84 percent for the characters and 77 percent for the words.
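
    A minimal sketch of the two-step procedure described above (low-pass smoothing, then thresholding at the valley between the first two peaks of the smoothed intensity histogram), assuming a grayscale image with dark text supplied as a NumPy array; the filter sigmas and the peak-picking rule are illustrative choices, not the authors' exact parameters:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter, gaussian_filter1d

    def binarize_document(gray, smooth_sigma=1.5, hist_sigma=2.0):
        """Clean up and binarize a grayscale document image with dark text.

        Step 1: low-pass filter the image to suppress high-frequency background
        texture and speckle noise. Step 2: smooth the intensity histogram and
        threshold at the valley between its first peak (text) and second peak.
        """
        smoothed = gaussian_filter(gray.astype(float), sigma=smooth_sigma)

        hist, bin_edges = np.histogram(smoothed, bins=256)
        hist = gaussian_filter1d(hist.astype(float), sigma=hist_sigma)

        # Local maxima of the smoothed histogram; the first one is the text peak.
        peaks = [i for i in range(1, 255)
                 if hist[i] >= hist[i - 1] and hist[i] > hist[i + 1]]
        first_peak = peaks[0] if peaks else int(np.argmax(hist))
        second_peak = peaks[1] if len(peaks) > 1 else 255

        # Threshold at the valley (histogram minimum) between the two peaks.
        valley = first_peak + int(np.argmin(hist[first_peak:second_peak + 1]))
        threshold = bin_edges[valley]

        return (smoothed <= threshold).astype(np.uint8)  # 1 = text, 0 = background
    ```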

  13. The effects of age and divided attention on spontaneous recognition.

    PubMed

    Anderson, Benjamin A; Jacoby, Larry L; Thomas, Ruthann C; Balota, David A

    2011-05-01

    Studies of recognition typically involve tests in which the participant's memory for a stimulus is directly questioned. There are occasions, however, in which memory occurs more spontaneously (e.g., an acquaintance seeming familiar out of context). Spontaneous recognition was investigated in a novel paradigm involving study of pictures and words followed by recognition judgments on stimuli with an old or new word superimposed over an old or new picture. Participants were instructed to make their recognition decision on either the picture or the word and to ignore the distracting stimulus. Spontaneous recognition was measured as the influence of old vs. new distracters on target recognition. Across two experiments, older adults and younger adults placed under divided attention showed a greater tendency to spontaneously recognize old distracters as compared to full-attention younger adults. The occurrence of spontaneous recognition is discussed in relation to the ability to constrain retrieval to goal-relevant information.

  14. Think the thought, walk the walk - social priming reduces the Stroop effect.

    PubMed

    Goldfarb, Liat; Aisenberg, Daniela; Henik, Avishai

    2011-02-01

    In the Stroop task, participants name the color of the ink that a color word is written in and ignore the meaning of the word. Naming the color of an incongruent color word (e.g., RED printed in blue) is slower than naming the color of a congruent color word (e.g., RED printed in red). This robust effect is known as the Stroop effect, and it suggests that the intentional instruction - "do not read the word" - has limited influence on one's behavior, as word reading is executed via an automatic path. Here we examine the influence of a non-intentional instruction - "do not read the word" - on the Stroop effect. Social concept priming tends to trigger automatic behavior that is in line with the primed concept. Participants were primed with the social concept "dyslexia" before performing the Stroop task. Because dyslectic people are perceived as having reading difficulties, the Stroop effect was reduced, and even failed to reach significance, after the dyslectic-person priming. A similar effect was replicated in a further experiment, and overall it suggests that the human cognitive system has more success in decreasing the influence of another automatic process via an automatic path rather than via an intentional path. Copyright © 2010 Elsevier B.V. All rights reserved.

  15. Effects of hydrocortisone on false memory recognition in healthy men and women.

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja

    2016-12-01

    Studies of the effect of stress on false memories that used psychosocial and physiological stressors have yielded mixed results. In the present study, we systematically tested the effect of exogenous hydrocortisone using a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the false memory paradigm, that is, the Deese-Roediger-McDermott (DRM) paradigm. We used emotionally charged and neutral DRM-based word lists to compare false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and the placebo group were revealed for false or for true recognition performance. In general, false recognition rates were lower than true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We further found an interaction effect between sex and recognition: post hoc t tests showed that for true recognition women showed significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish between old versus novel words in young healthy individuals. However, sex and emotional valence of word stimuli appear to be important moderators. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  16. Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization

    ERIC Educational Resources Information Center

    Bardhan, Neil Prodeep

    2010-01-01

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…

  17. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  18. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  19. Russian Character Recognition using Self-Organizing Map

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.

    2017-01-01

    The World Tourism Organization (UNWTO) reported that 28 million visitors travelled to Russia in 2014. Many of these visitors may have trouble typing Russian words into a digital dictionary, because the Cyrillic letters used in Russia and its neighboring countries have different shapes than Latin letters and visitors may not be familiar with Cyrillic. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera captures an image of the words as input. The captured image is cropped, and several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, feature extraction is applied to the image. Cyrillic letters in the image are then recognized using the Self-Organizing Map (SOM) algorithm. SOM successfully recognized 89.09% of the Cyrillic letters in computer-generated images and 88.89% of the Cyrillic letters in images captured by a smartphone camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from the images captured by the smartphone camera, giving a word recognition accuracy of 83.42%.
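
    The record above names the main pipeline steps (pre-processing, feature extraction, SOM classification) without giving parameters. Below is a minimal Self-Organizing Map that could be trained on letter-image feature vectors; the grid size, learning-rate and neighborhood schedules, and the majority-vote node labeling are illustrative assumptions, not the configuration used in the paper:

    ```python
    import numpy as np

    class SOM:
        """Minimal Self-Organizing Map for fixed-length feature vectors."""

        def __init__(self, rows, cols, dim, seed=0):
            rng = np.random.default_rng(seed)
            self.weights = rng.random((rows * cols, dim))
            # Grid coordinate of each node, used by the neighborhood function.
            self.coords = np.array([(r, c) for r in range(rows) for c in range(cols)], dtype=float)
            self.radius0 = max(rows, cols) / 2.0

        def bmu(self, x):
            """Index of the best-matching unit (closest weight vector) for x."""
            return int(np.argmin(np.linalg.norm(self.weights - x, axis=1)))

        def train(self, data, epochs=20, lr0=0.5):
            n_steps, step = epochs * len(data), 0
            for _ in range(epochs):
                for x in data:
                    t = step / n_steps
                    lr = lr0 * (1.0 - t)                      # decaying learning rate
                    radius = self.radius0 * (1.0 - t) + 1.0   # shrinking neighborhood
                    best = self.bmu(x)
                    dist = np.linalg.norm(self.coords - self.coords[best], axis=1)
                    h = np.exp(-(dist ** 2) / (2.0 * radius ** 2))
                    self.weights += lr * h[:, None] * (x - self.weights)
                    step += 1

    def label_nodes(som, data, labels):
        """Give each node the majority label of the training samples it wins."""
        votes = {}
        for x, y in zip(data, labels):
            votes.setdefault(som.bmu(x), []).append(y)
        return {node: max(set(ys), key=ys.count) for node, ys in votes.items()}

    def classify(som, node_labels, x):
        """Classify a feature vector by the label of its best-matching node."""
        return node_labels.get(som.bmu(x))
    ```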

  20. The ties that bind what is known to the recognition of what is new.

    PubMed

    Nelson, D L; Zhang, N; McKinney, V M

    2001-09-01

    Recognition success varies with how information is encoded (e.g., level of processing) and with what is already known as a result of past learning (e.g., word frequency). This article presents the results of experiments showing that preexisting connections involving the associates of studied words facilitate their recognition regardless of whether the words are intentionally encoded or are incidentally encoded under semantic or nonsemantic conditions. Words are more likely to be recognized when they have either more resonant connections coming back to them from their associates or more connections among their associates. Such results occur even though attention is never drawn to these associates. Regression analyses showed that these connections affect recognition independently of frequency, so the present results add to the literature showing that prior lexical knowledge contributes to episodic recognition. In addition, equations that use free-association data to derive composite strength indices of resonance and connectivity were evaluated. Implications for theories of recognition are discussed.

  1. Predicting individual differences in reading comprehension: a twin study

    PubMed Central

    Cutting, Laurie; Deater-Deckard, Kirby; DeThorne, Laura S.; Justice, Laura M.; Schatschneider, Chris; Thompson, Lee A.; Petrill, Stephen A.

    2010-01-01

    We examined the Simple View of reading from a behavioral genetic perspective. Two aspects of word decoding (phonological decoding and word recognition), two aspects of oral language skill (listening comprehension and vocabulary), and reading comprehension were assessed in a twin sample at age 9. Using latent factor models, we found that overlap among phonological decoding, word recognition, listening comprehension, vocabulary, and reading comprehension was primarily due to genetic influences. Shared environmental influences accounted for associations among word recognition, listening comprehension, vocabulary, and reading comprehension. Independent of phonological decoding and word recognition, there was a separate genetic link between listening comprehension, vocabulary, and reading comprehension and a specific shared environmental link between vocabulary and reading comprehension. There were no residual genetic or environmental influences on reading comprehension. The findings provide evidence for a genetic basis to the “Simple View” of reading. PMID:20814768

  2. When the Eyes No Longer Lead: Familiarity and Length Effects on Eye-Voice Span

    PubMed Central

    Silva, Susana; Reis, Alexandra; Casaca, Luís; Petersson, Karl M.; Faísca, Luís

    2016-01-01

    During oral reading, the eyes tend to be ahead of the voice (eye-voice span, EVS). It has been hypothesized that the extent to which this happens depends on the automaticity of reading processes, namely on the speed of print-to-sound conversion. We tested whether EVS is affected by another automaticity component – immunity from interference. To that end, we manipulated word familiarity (high-frequency, low-frequency, and pseudowords, PW) and word length as proxies of immunity from interference, and we used linear mixed effects models to measure the effects of both variables on the time interval at which readers do parallel processing by gazing at word N + 1 while not having articulated word N yet (offset EVS). Parallel processing was enhanced by automaticity, as shown by familiarity × length interactions on offset EVS, and it was impeded by lack of automaticity, as shown by the transformation of offset EVS into voice-eye span (voice ahead of the offset of the eyes) in PWs. The relation between parallel processing and automaticity was strengthened by the fact that offset EVS predicted reading velocity. Our findings contribute to understanding how the offset EVS, an index that is obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent. In addition, we compared the duration of the offset EVS with the average reference duration of stages in word production, and we saw that the offset EVS may accommodate more than the articulatory programming stage of word N. PMID:27853446

  3. Hypoactive medial prefrontal cortex functioning in adults reporting childhood emotional maltreatment.

    PubMed

    van Harmelen, Anne-Laura; van Tol, Marie-José; Dalgleish, Tim; van der Wee, Nic J A; Veltman, Dick J; Aleman, André; Spinhoven, Philip; Penninx, Brenda W J H; Elzinga, Bernet M

    2014-12-01

    Childhood emotional maltreatment (CEM) has adverse effects on medial prefrontal cortex (mPFC) morphology, a structure that is crucial for cognitive functioning and (emotional) memory and which modulates the limbic system. In addition, CEM has been linked to amygdala hyperactivity during emotional face processing. However, no study has yet investigated the functional neural correlates of neutral and emotional memory in adults reporting CEM. Using functional magnetic resonance imaging, we investigated CEM-related differential activations in mPFC during the encoding and recognition of positive, negative and neutral words. The sample (N = 194) consisted of patients with depression and/or anxiety disorders and healthy controls (HC) reporting CEM (n = 96) and patients and HC reporting no abuse (n = 98). We found a consistent pattern of mPFC hypoactivation during encoding and recognition of positive, negative and neutral words in individuals reporting CEM. These results were not explained by psychopathology or severity of depression or anxiety symptoms, or by gender, level of neuroticism, parental psychopathology, negative life events, antidepressant use or decreased mPFC volume in the CEM group. These findings indicate mPFC hypoactivity in individuals reporting CEM during emotional and neutral memory encoding and recognition. Our findings suggest that CEM may increase individuals' risk for developing psychopathology at different levels of processing in the brain: blunted mPFC activation during higher order processing and enhanced amygdala activation during automatic/lower order emotion processing. These findings are vital in understanding the long-term consequences of CEM. © The Author (2014). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  4. Ease of identifying words degraded by visual noise.

    PubMed

    Barber, P; de la Mahotière, C

    1982-08-01

    A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.

  5. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  6. Textual emotion recognition for enhancing enterprise computing

    NASA Astrophysics Data System (ADS)

    Quan, Changqin; Ren, Fuji

    2016-05-01

    The growing interest in affective computing (AC) raises many valuable research topics that can meet different application demands in enterprise systems. The present study explores a subarea of AC techniques: textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of 'emotion state in text' is proposed to encompass the multidimensional emotions in text; it provides a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that bear indirect emotions, emotional ambiguity, and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and can improve enterprise competitiveness in many respects.

  7. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    PubMed

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
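
    Noise vocoding, used above to degrade the sentences, replaces the fine spectral detail of speech with band-limited noise while preserving the temporal envelope within each of a small number of frequency channels. A minimal sketch, assuming a mono signal as a NumPy array and a sample rate well above twice the upper band edge; the filter order, band edges, and envelope extraction method are illustrative choices, not those of the study:

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(speech, fs, n_channels=8, f_lo=80.0, f_hi=6000.0):
        """Noise-vocode a speech signal with n_channels spectral channels."""
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
        rng = np.random.default_rng(0)
        out = np.zeros(len(speech), dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, speech)
            envelope = np.abs(hilbert(band))                   # temporal envelope
            carrier = sosfiltfilt(sos, rng.standard_normal(len(speech)))
            out += envelope * carrier                          # envelope-modulated noise
        return out / (np.max(np.abs(out)) + 1e-12)             # peak-normalize
    ```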

  8. A speech-controlled environmental control system for people with severe dysarthria.

    PubMed

    Hawley, Mark S; Enderby, Pam; Green, Phil; Cunningham, Stuart; Brownsell, Simon; Carmichael, James; Parker, Mark; Hatzis, Athanassios; O'Neill, Peter; Palmer, Rebecca

    2007-06-01

    Automatic speech recognition (ASR) can provide a rapid means of controlling electronic assistive technology. Off-the-shelf ASR systems function poorly for users with severe dysarthria because of the increased variability of their articulations. We have developed a limited vocabulary speaker dependent speech recognition application which has greater tolerance to variability of speech, coupled with a computerised training package which assists dysarthric speakers to improve the consistency of their vocalisations and provides more data for recogniser training. These applications, and their implementation as the interface for a speech-controlled environmental control system (ECS), are described. The results of field trials to evaluate the training program and the speech-controlled ECS are presented. The user-training phase increased the recognition rate from 88.5% to 95.4% (p<0.001). Recognition rates were good for people with even the most severe dysarthria in everyday usage in the home (mean word recognition rate 86.9%). Speech-controlled ECS were less accurate (mean task completion accuracy 78.6% versus 94.8%) but were faster to use than switch-scanning systems, even taking into account the need to repeat unsuccessful operations (mean task completion time 7.7s versus 16.9s, p<0.001). It is concluded that a speech-controlled ECS is a viable alternative to switch-scanning systems for some people with severe dysarthria and would lead, in many cases, to more efficient control of the home.

  9. Rapid automatic keyword extraction for information retrieval and analysis

    DOEpatents

    Rose, Stuart J [Richland, WA]; Cowley, Wendy E [Richland, WA]; Crow, Vernon L [Richland, WA]; Cramer, Nicholas O [Richland, WA]

    2012-03-06

    Methods and systems for rapid automatic keyword extraction for information retrieval and analysis. Embodiments can include parsing words in an individual document by delimiters, stop words, or both in order to identify candidate keywords. Word scores for each word within the candidate keywords are then calculated based on a function of co-occurrence degree, co-occurrence frequency, or both. Based on a function of the word scores for words within the candidate keyword, a keyword score is calculated for each of the candidate keywords. A portion of the candidate keywords are then extracted as keywords based, at least in part, on the candidate keywords having the highest keyword scores.
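
    The record above outlines the three RAKE steps: candidate phrases delimited by punctuation and stop words, word scores from co-occurrence degree and frequency, and phrase scores as the sum of member-word scores. A minimal sketch follows; the stop-word list, tokenization rules, and the degree/frequency scoring variant are simplifications chosen for illustration, not the patented implementation:

    ```python
    import re
    from collections import defaultdict

    STOP_WORDS = {"a", "an", "and", "are", "as", "at", "be", "by", "for", "from",
                  "in", "is", "it", "of", "on", "or", "that", "the", "to", "with"}

    def rake_keywords(text, top_k=5, stop_words=STOP_WORDS):
        # 1. Candidate phrases: runs of content words between delimiters/stop words.
        candidates = []
        for fragment in re.split(r"[.,;:!?()\n]", text.lower()):
            phrase = []
            for word in re.findall(r"[a-z0-9'-]+", fragment):
                if word in stop_words:
                    if phrase:
                        candidates.append(tuple(phrase))
                    phrase = []
                else:
                    phrase.append(word)
            if phrase:
                candidates.append(tuple(phrase))

        # 2. Word scores: co-occurrence degree divided by frequency.
        freq, degree = defaultdict(int), defaultdict(int)
        for phrase in candidates:
            for word in phrase:
                freq[word] += 1
                degree[word] += len(phrase)   # co-occurs with every word in its phrase
        word_score = {w: degree[w] / freq[w] for w in freq}

        # 3. Keyword scores: sum of member-word scores; return the top phrases.
        keyword_score = {p: sum(word_score[w] for w in p) for p in set(candidates)}
        return sorted(keyword_score.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    ```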

  10. Influence of automatic word reading on motor control.

    PubMed

    Gentilucci, M; Gangitano, M

    1998-02-01

    We investigated the possible influence of automatic word reading on processes of visuo-motor transformation. Six subjects were required to reach and grasp a rod on whose visible face the word 'long' or 'short' was printed. Word reading was not explicitly required. In order to induce subjects to visually analyse the object trial by trial, object position and size were randomly varied during the experimental session. The kinematics of the reaching component was affected by word presentation. Peak acceleration, peak velocity, and peak deceleration of arm were higher for the word 'long' with respect to the word 'short'. That is, during the initial movement phase subjects automatically associated the meaning of the word with the distance to be covered and activated a motor program for a farther and/or nearer object position. During the final movement phase, subjects modified the braking forces (deceleration) in order to correct the initial error. No effect of the words on the grasp component was observed. These results suggest a possible influence of cognitive functions on motor control and seem to contrast with the notion that the analyses executed in the ventral and dorsal cortical visual streams are different and independent.

  11. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction.

    PubMed

    Najafi, Elham; Darooneh, Amir H

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction.
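
    The record above proposes ranking words by how much the fractal pattern of their positions in the original text departs from the pattern in a shuffled text. The sketch below only illustrates that general idea with a simple box-counting dimension over word positions; the estimator, box sizes, and number of shuffles are assumptions for illustration rather than the authors' actual index, and the target word is assumed to occur in the token list:

    ```python
    import random
    import numpy as np

    def box_counting_dimension(positions, box_sizes=(2, 4, 8, 16, 32, 64)):
        """Box-counting estimate of the fractal dimension of 1-D word positions."""
        counts = [len({p // s for p in positions}) for s in box_sizes]
        # Slope of log(count) against log(1/box_size) estimates the dimension.
        slope, _ = np.polyfit(np.log(1.0 / np.asarray(box_sizes, dtype=float)),
                              np.log(counts), 1)
        return slope

    def degree_of_fractality(word, tokens, n_shuffles=20, seed=0):
        """How far the word's positional dimension departs from a shuffled text."""
        rng = random.Random(seed)
        d_original = box_counting_dimension([i for i, t in enumerate(tokens) if t == word])
        shuffled_dims = []
        for _ in range(n_shuffles):
            shuffled = list(tokens)
            rng.shuffle(shuffled)
            shuffled_dims.append(
                box_counting_dimension([i for i, t in enumerate(shuffled) if t == word]))
        return abs(d_original - float(np.mean(shuffled_dims)))
    ```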

  12. The Fractal Patterns of Words in a Text: A Method for Automatic Keyword Extraction

    PubMed Central

    Najafi, Elham; Darooneh, Amir H.

    2015-01-01

    A text can be considered as a one dimensional array of words. The locations of each word type in this array form a fractal pattern with certain fractal dimension. We observe that important words responsible for conveying the meaning of a text have dimensions considerably different from one, while the fractal dimensions of unimportant words are close to one. We introduce an index quantifying the importance of the words in a given text using their fractal dimensions and then ranking them according to their importance. This index measures the difference between the fractal pattern of a word in the original text relative to a shuffled version. Because the shuffled text is meaningless (i.e., words have no importance), the difference between the original and shuffled text can be used to ascertain degree of fractality. The degree of fractality may be used for automatic keyword detection. Words with the degree of fractality higher than a threshold value are assumed to be the retrieved keywords of the text. We measure the efficiency of our method for keywords extraction, making a comparison between our proposed method and two other well-known methods of automatic keyword extraction. PMID:26091207

  13. The Influence of Phonotactic Probability on Word Recognition in Toddlers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Shafer, Valerie L.; Schwartz, Richard G.; Marton, Klara

    2014-01-01

    This study examined the influence of phonotactic probability on word recognition in English-speaking toddlers. Typically developing toddlers completed a preferential looking paradigm using familiar words, which consisted of either high or low phonotactic probability sound sequences. The participants' looking behavior was recorded in response to…

  14. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    ERIC Educational Resources Information Center

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  15. Influences of Lexical Processing on Reading.

    ERIC Educational Resources Information Center

    Yang, Yu-Fen; Kuo, Hsing-Hsiu

    2003-01-01

    Investigates how early lexical processing (word recognition) could influence reading. Finds that less-proficient readers could not finish the task of word recognition within time limits and their accuracy rates were quite low, whereas the proficient readers processed the physical words immediately and translated them into meanings quickly in order…

  16. Spreading Activation in an Attractor Network with Latching Dynamics: Automatic Semantic Priming Revisited

    PubMed Central

    Lerner, Itamar; Bentin, Shlomo; Shriki, Oren

    2012-01-01

    Localist models of spreading activation (SA) and models assuming distributed representations offer very different takes on semantic priming, a widely investigated paradigm in word recognition and semantic memory research. In the present study we implemented SA in an attractor neural network model with distributed representations and created a unified framework for the two approaches. Our model assumes a synaptic depression mechanism leading to autonomous transitions between encoded memory patterns (latching dynamics), which account for the major characteristics of automatic semantic priming in humans. Using computer simulations we demonstrated how findings that challenged attractor-based networks in the past, such as mediated and asymmetric priming, are a natural consequence of our present model’s dynamics. Puzzling results regarding backward priming were also given a straightforward explanation. In addition, the current model addresses some of the differences between semantic and associative relatedness and explains how these differences interact with stimulus onset asynchrony in priming experiments. PMID:23094718

  17. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  18. Cross-modal working memory binding and word recognition skills: how specific is the link?

    PubMed

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  19. A benefit of context reinstatement to recognition memory in aging: the role of familiarity processes.

    PubMed

    Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M

    2017-11-01

    Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.

  20. Associative and semantic priming effects occur at very short stimulus-onset asynchronies in lexical decision and naming.

    PubMed

    Perea, M; Gotor, A

    1997-02-01

    Prior research has found significant associative/semantic priming effects at very short stimulus-onset asynchronies (SOAs) in experimental tasks such as lexical decision, but not in naming tasks (however, see Lukatela and Turvey, 1994). In this paper, the time course of associative priming effects was analyzed at several very short SOAs (33, 50, and 67 ms), using the masked priming paradigm (Forster and Davis, 1984), both in lexical decision (Experiment 1) and naming (Experiment 2). The results show small but significant associative priming effects in both tasks. Additionally, using the masked priming procedure at the 67 ms SOA, Experiments 3 and 4 show facilitatory priming effects for both associatively related and semantically related (unassociated) pairs in lexical decision and naming tasks. That is, automatic priming can be semantic. Taken together, our data appear to support interactive models of word recognition in which semantic activation may influence the early stages of word processing.

  1. Masked priming and ERPs dissociate maturation of orthographic and semantic components of visual word recognition in children

    PubMed Central

    Eddy, Marianna D.; Grainger, Jonathan; Holcomb, Phillip J.; Mitra, Priya; Gabrieli, John D. E.

    2014-01-01

    This study examined the time-course of reading single words in children and adults using masked repetition priming and the recording of event-related potentials. The N250 and N400 repetition priming effects were used to characterize form- and meaning-level processing, respectively. Children had larger amplitude N250 effects than adults for both shorter and longer duration primes. Children did not differ from adults on the N400 effect. The difference on the N250 suggests that automaticity for form processing is still maturing in children relative to adults, while the lack of differentiation on the N400 effect suggests that meaning processing is relatively mature by late childhood. The overall similarity in the children’s repetition priming effects to adults’ effects is in line with theories of reading acquisition, according to which children rapidly transition to an orthographic strategy for fast access to semantic information from print. PMID:24313638

  2. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

    PubMed Central

    Lewis, Dawna E.; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    2016-01-01

    Objectives The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- vs. low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design Sixteen CHH with mild-to-moderate hearing loss and 16 age-matched CNH participated (5–12 yrs). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a 5- or 3-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably to CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared to their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates. PMID:28045838

  3. The Role of the Association in Recognition Memory.

    ERIC Educational Resources Information Center

    Underwood, Benton J.

    The purpose of the eight experiments was to assess the role which associations between two words played in recognition decisions. The evidence on weak associations established in the laboratory indicated that association was playing a small role, but that the recognition performance on pairs of words was highly predictable from frequency…

  4. Memory effects of sleep, emotional valence, arousal and novelty in children.

    PubMed

    Vermeulen, Marije C M; van der Heijden, Kristiaan B; Benjamins, Jeroen S; Swaab, Hanna; van Someren, Eus J W

    2017-06-01

    Effectiveness of memory consolidation is determined by multiple factors, including sleep after learning, emotional valence, arousal and novelty. Few studies have investigated how the effect of sleep compares with (and interacts with) these other factors, and virtually none have done so in children. The present study did so by repeated assessment of declarative memory in 386 children (45% boys) aged 9-11 years through an online word-pair task. Children were randomly assigned to either a morning or evening learning session of 30 unrelated word-pairs with positive, neutral or negative valenced cues and neutral targets. After immediately assessing baseline recognition, delayed recognition was recorded either 12 or 24 h later, resulting in four different assessment schedules. One week later, the procedure was repeated with exactly the same word-pairs to evaluate whether effects differed for relearning versus original novel learning. Mixed-effect logistic regression models were used to evaluate how the probability of correct recognition was affected by sleep, valence, arousal, novelty and their interactions. Both immediate and delayed recognition were worse for pairs with negatively valenced or less arousing cue words. Relearning improved immediate and delayed word-pair recognition. In contrast to these effects, sleep did not affect recognition, nor did sleep moderate the effects of arousal, valence and novelty. The findings suggest a robust inclination of children to specifically forget the pairing of words to negatively valenced cue words. In agreement with a recent meta-analysis, children seem to depend less on sleep for the consolidation of information than has been reported for adults, irrespective of the emotional valence, arousal and novelty of word-pairs. © 2017 European Sleep Research Society.

  5. Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease

    PubMed Central

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Background/Aims Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551

  6. Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.

    PubMed

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.

  7. Reconsidering the role of temporal order in spoken word recognition.

    PubMed

    Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob

    2013-10-01

    Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.

  8. Coordination of Word Recognition and Oculomotor Control During Reading: The Role of Implicit Lexical Decisions

    PubMed Central

    Choi, Wonil; Gordon, Peter C.

    2013-01-01

    The coordination of word-recognition and oculomotor processes during reading was evaluated in two eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target’s TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. PMID:23106372

  9. Recollection and familiarity for words and faces: a study comparing Remember-Know judgements and the Process Dissociation Procedure.

    PubMed

    Espinosa-García, María; Vaquero, Joaquín M M; Milliken, Bruce; Tudela, Pío

    2017-01-01

    Measures of recollection and familiarity often differ depending on the paradigm utilised. Remember-Know (R-K) and Process Dissociation Procedure (PDP) methods have been commonly used but rarely compared within a single study. In the current experiments, R-K and PDP were compared by examining the effect of attention at study and time to respond at test on recollection and familiarity using the same experimental procedures for each paradigm. We also included faces in addition to words to test the generality of the findings often obtained using words. The results from the R-K paradigm revealed that recollection and familiarity were similarly affected by attention at study and time to respond at test. However, in the case of PDP, the measures of recollection and familiarity showed a different pattern of results. The effects observed for recollection were similar to those obtained with the R-K method, whereas familiarity was affected by time to respond but not by attention at study. These results are discussed in relation to the controlled-automatic processing distinction and the contribution of each paradigm to research on recognition memory.
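
    The abstract does not spell out the estimation equations, but the Process Dissociation Procedure conventionally derives recollection and familiarity from "old" response rates under inclusion and exclusion instructions; a minimal sketch with made-up rates follows.

```python
# Sketch of the conventional Process Dissociation Procedure estimates:
# recollection and familiarity are derived from "old" response rates under
# inclusion vs. exclusion instructions. The rates below are made up.
def pdp_estimates(p_inclusion, p_exclusion):
    """Return (recollection, familiarity) from inclusion/exclusion 'old' rates."""
    recollection = p_inclusion - p_exclusion
    familiarity = p_exclusion / (1.0 - recollection) if recollection < 1.0 else float("nan")
    return recollection, familiarity

R, F = pdp_estimates(p_inclusion=0.80, p_exclusion=0.30)
print(f"Recollection = {R:.2f}, Familiarity = {F:.2f}")  # Recollection = 0.50, Familiarity = 0.60
```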

  10. Speaker-Machine Interaction in Automatic Speech Recognition. Technical Report.

    ERIC Educational Resources Information Center

    Makhoul, John I.

    The feasibility and limitations of speaker adaptation in improving the performance of a "fixed" (speaker-independent) automatic speech recognition system were examined. A fixed vocabulary of 55 syllables is used in the recognition system which contains 11 stops and fricatives and five tense vowels. The results of an experiment on speaker…

  11. Modeling Open-Set Spoken Word Recognition in Postlingually Deafened Adults after Cochlear Implantation: Some Preliminary Results with the Neighborhood Activation Model

    PubMed Central

    Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.

    2012-01-01

    Hypotheses: Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background: The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices, modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods: Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results: The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. Conclusion: The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process. PMID:12851554
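
    A minimal sketch of the frequency-weighted decision rule at the core of the Neighborhood Activation Model, assuming phoneme confusion probabilities taken from a listener's confusion matrices; the toy lexicon, frequencies, and confusion values are illustrative, not the study's data.

```python
# Sketch of the Neighborhood Activation Model's frequency-weighted decision rule:
# a word's identification probability is its own confusion-weighted activation,
# scaled by word frequency, divided by the summed activation of the word plus its
# similar-sounding neighbors. The tiny lexicon, frequencies, and phoneme confusion
# probabilities below are illustrative only.
from math import prod

def stimulus_word_prob(stimulus, candidate, confusion):
    """Product of phoneme confusion probabilities p(candidate phoneme | stimulus phoneme)."""
    return prod(confusion[s][c] for s, c in zip(stimulus, candidate))

def nam_probability(stimulus, lexicon, freqs, confusion):
    """P(identify stimulus) = frequency-weighted activation of stimulus / sum over lexicon."""
    weighted = {w: stimulus_word_prob(stimulus, w, confusion) * freqs[w] for w in lexicon}
    return weighted[stimulus] / sum(weighted.values())

# Toy example: two CVC words over phonemes {b, k, ae, t} with a simple confusion matrix.
confusion = {
    "b": {"b": 0.8, "k": 0.2, "ae": 0.0, "t": 0.0},
    "k": {"b": 0.2, "k": 0.8, "ae": 0.0, "t": 0.0},
    "ae": {"b": 0.0, "k": 0.0, "ae": 1.0, "t": 0.0},
    "t": {"b": 0.0, "k": 0.0, "ae": 0.0, "t": 1.0},
}
lexicon = [("b", "ae", "t"), ("k", "ae", "t")]   # "bat", "cat"
freqs = {lexicon[0]: 40, lexicon[1]: 60}         # relative frequency counts

print(nam_probability(lexicon[0], lexicon, freqs, confusion))
```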

  12. Tracking speech comprehension in space and time.

    PubMed

    Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D

    2006-07-01

    A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) between latency measures of word recognition in individual study participants and the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting the standard from the deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and as deviant, showed the same significant correlation with word recognition processes.

  13. Application of automatic threshold in dynamic target recognition with low contrast

    NASA Astrophysics Data System (ADS)

    Miao, Hua; Guo, Xiaoming; Chen, Yu

    2014-11-01

    A hybrid photoelectric joint transform correlator can achieve automatic, real-time recognition with high precision by combining optical and electronic devices. When recognizing low-contrast targets with a photoelectric joint transform correlator, differences in attitude, brightness, and grayscale between target and template mean that only four to five frames of a dynamic target can be recognized without any processing. A CCD camera captures the dynamic target images at 25 frames per second. Automatic thresholding has many advantages, such as fast processing, effective suppression of noise interference, enhancement of the diffraction energy of useful information, and better preservation of the outlines of target and template, so it plays an important role in target recognition with optical correlation methods. However, the threshold obtained automatically by the program cannot achieve the best recognition results for dynamic targets, because outline information is broken to some extent; in most cases the optimal threshold has to be found by manual intervention. To address the characteristics of dynamic targets, an improved automatic thresholding procedure was implemented by multiplying the OTSU threshold of the target and template by a scale coefficient of the processed image and combining the result with mathematical morphology. With this improved processing, the optimal threshold for dynamic low-contrast target images can be obtained automatically, and the recognition rate of dynamic targets is improved through a reduced background-noise effect and increased correlation information. A series of dynamic tank images moving at about 70 km/h was used as the target set. Without any processing, the 1st frame of this series correlates only with the 3rd frame; with the standard OTSU threshold, the 80th frame can be recognized; and with the improved automatic threshold processing of the joint images, this number increases to 89 frames. The experimental results show that the improved automatic threshold processing has particular application value for the recognition of dynamic targets with low contrast.
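
    A minimal sketch of the scaled-OTSU idea described above, assuming OpenCV is available; the scale coefficient and the morphological kernel size are placeholder choices, not the paper's tuned values.

```python
# Sketch of the "improved automatic threshold" idea: take the Otsu threshold of the
# joint (target + template) image, scale it by an empirically chosen coefficient,
# re-threshold, and clean up with mathematical morphology. The scale coefficient
# and kernel size are placeholders.
import cv2
import numpy as np

def scaled_otsu_binarize(gray, scale=0.85):
    """Binarize a grayscale image using a scaled Otsu threshold plus morphology."""
    otsu_t, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    _, binary = cv2.threshold(gray, int(otsu_t * scale), 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    # Opening removes isolated noise while preserving the target/template outlines.
    return cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = (np.random.rand(128, 128) * 255).astype(np.uint8)  # stand-in for a tank frame
    print(scaled_otsu_binarize(frame).shape)
```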

  14. Semantic Neighborhood Effects for Abstract versus Concrete Words

    PubMed Central

    Danguecan, Ashley N.; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422

  15. Semantic Neighborhood Effects for Abstract versus Concrete Words.

    PubMed

    Danguecan, Ashley N; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words.

  16. Word associations contribute to machine learning in automatic scoring of degree of emotional tones in dream reports.

    PubMed

    Amini, Reza; Sabourin, Catherine; De Koninck, Joseph

    2011-12-01

    Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic, non-subjective scoring technique. Past research has used word search and emotional-affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words, as found in a dictionary and an encyclopedia. It was hypothesized that this addition would strengthen the machine learning model and improve its predictive accuracy beyond that of previous models. With a sample of 458 dreams, the model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale and, for the first time, reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
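
    A minimal sketch of how co-occurrence-based word-association features might feed a classifier; the seed word lists, toy reports, and choice of logistic regression are assumptions for illustration and do not reproduce the paper's dictionary/encyclopedia counts or learning model.

```python
# Sketch: deriving word-association features from overlap with emotion "defining
# words" and feeding them to a classifier. The defining-word lists, toy reports,
# labels, and classifier are illustrative assumptions only.
from collections import Counter
from sklearn.linear_model import LogisticRegression

NEGATIVE_SEEDS = {"fear", "angry", "dark", "falling"}   # placeholder defining words
POSITIVE_SEEDS = {"happy", "calm", "light", "flying"}

def association_features(report):
    """Count report words associated with (here simply: matching) each seed set."""
    words = Counter(report.lower().split())
    neg = sum(c for w, c in words.items() if w in NEGATIVE_SEEDS)
    pos = sum(c for w, c in words.items() if w in POSITIVE_SEEDS)
    return [neg, pos, len(words)]

reports = ["I was falling in the dark and felt fear",
           "A calm walk in the light, I was happy"]
labels = [1, 0]   # 1 = negative tone (toy labels)

clf = LogisticRegression().fit([association_features(r) for r in reports], labels)
print(clf.predict([association_features("dark room, angry voices")]))
```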

  17. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.

    PubMed

    Robotham, Ro J; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.

  18. Audiovisual speech facilitates voice learning.

    PubMed

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  19. Structuring Broadcast Audio for Information Access

    NASA Astrophysics Data System (ADS)

    Gauvain, Jean-Luc; Lamel, Lori

    2003-12-01

    One rapidly expanding application area for state-of-the-art speech recognition technology is the automatic processing of broadcast audiovisual data for information access. Since much of the linguistic information is found in the audio channel, speech recognition is a key enabling technology which, when combined with information retrieval techniques, can be used for searching large audiovisual document collections. Audio indexing must take into account the specificities of audio data such as needing to deal with the continuous data stream and an imperfect word transcription. Other important considerations are dealing with language specificities and facilitating language portability. At Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), broadcast news transcription systems have been developed for seven languages: English, French, German, Mandarin, Portuguese, Spanish, and Arabic. The transcription systems have been integrated into prototype demonstrators for several application areas such as audio data mining, structuring audiovisual archives, selective dissemination of information, and topic tracking for media monitoring. As examples, this paper addresses the spoken document retrieval and topic tracking tasks.

  20. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  1. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  2. Limited Role of Contextual Information in Adult Word Recognition. Technical Report No. 411.

    ERIC Educational Resources Information Center

    Durgunoglu, Aydin Y.

    Recognizing a word in a meaningful text involves processes that combine information from many different sources, and both bottom-up processes (such as feature extraction and letter recognition) and top-down processes (contextual information) are thought to interact when skilled readers recognize words. Two similar experiments investigated word…

  3. Memory for Pictures, Words, and Spatial Location in Older Adults: Evidence for Pictorial Superiority.

    ERIC Educational Resources Information Center

    Park, Denise Cortis; And Others

    1983-01-01

    Tested recognition memory for items and spatial location by varying picture and word stimuli across four slide quadrants. Results showed a pictorial superiority effect for item recognition and a greater ability to remember the spatial location of pictures versus words for both old and young adults (N=95). (WAS)

  4. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  5. Foveational Complexity in Single Word Identification: Contralateral Visual Pathways Are Advantaged over Ipsilateral Pathways

    ERIC Educational Resources Information Center

    Obregon, Mateo; Shillcock, Richard

    2012-01-01

    Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…

  6. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  7. Using Constant Time Delay to Teach Braille Word Recognition

    ERIC Educational Resources Information Center

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  8. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, these kinds of probabilistic information may cue the locations…

  9. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  10. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  11. Genetic and Environmental Influences on Individual Differences in Printed Word Recognition.

    ERIC Educational Resources Information Center

    Gayan, Javier; Olson, Richard K.

    2003-01-01

    Explored genetic and environmental etiologies of individual differences in printed word recognition and related skills in identical and fraternal twin 8- to 18-year-olds. Found evidence for moderate genetic influences common between IQ, phoneme awareness, and word-reading skills and for stronger IQ-independent genetic influences that were common…

  12. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  13. Syllables and bigrams: orthographic redundancy and syllabic units affect visual word recognition at different processing levels.

    PubMed

    Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M

    2009-04-01

    Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.

  14. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  15. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif

    2016-03-11

    Document analysis tasks such as pattern recognition, word spotting, or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples are meant to reflect the input of a specific area of application. However, generating training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation, and recognition methods, each of which requires particular ground truth or samples to enable optimal training and validation, and these are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.
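
    A minimal sketch of the vocabulary-based error-correction step, assuming a similarity lookup against a list of common words; the tiny vocabulary and the use of difflib stand in for the paper's 50,000-word list and its actual matching procedure.

```python
# Sketch of vocabulary-based error correction: replace a recognizer's output word
# by the closest entry in a list of common words. The tiny vocabulary stands in
# for the paper's 50,000-word list; difflib similarity stands in for whatever
# distance measure the authors actually used.
import difflib

COMMON_WORDS = ["كتاب", "مدرسة", "قلم", "بيت"]   # placeholder for the most common words

def correct(recognized, vocabulary=COMMON_WORDS, cutoff=0.6):
    """Return the most similar vocabulary word, or the raw output if none is close."""
    matches = difflib.get_close_matches(recognized, vocabulary, n=1, cutoff=cutoff)
    return matches[0] if matches else recognized

print(correct("كتـاب"))   # a slightly corrupted recognizer output
```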

  16. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research

    PubMed Central

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif

    2016-01-01

    Document analysis tasks such as pattern recognition, word spotting, or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the list of words used is important when training samples are meant to reflect the input of a specific area of application. However, generating training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why there is a lack of such databases, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation, and recognition methods, each of which requires particular ground truth or samples to enable optimal training and validation, and these are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction. PMID:26978368

  17. The optimal viewing position effect in printed versus cursive words: Evidence of a reading cost for the cursive font.

    PubMed

    Danna, Jérémy; Massendari, Delphine; Furnari, Benjamin; Ducrot, Stéphanie

    2018-06-13

    Two eye-movement experiments were conducted to examine the effects of font type on the recognition of words presented in central vision, using a variable-viewing-position technique. Two main questions were addressed: (1) Is the optimal viewing position (OVP) for word recognition modulated by font type? (2) Is the cursive font more appropriate than the printed font in word recognition in children who exclusively write using a cursive script? In order to disentangle the role of perceptual difficulty associated with the cursive font and the impact of writing habits, we tested French adults (Experiment 1) and second-grade French children, the latter having exclusively learned to write in cursive (Experiment 2). Results revealed that the printed font is more appropriate than the cursive for recognizing words in both adults and children: adults were slightly less accurate in cursive than in printed stimuli recognition and children were slower to identify cursive stimuli than printed stimuli. Eye-movement measures also revealed that the OVP curves were flattened in cursive font in both adults and children. We concluded that the perceptual difficulty of the cursive font degrades word recognition by impacting the OVP stability. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    PubMed

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    The aims were to analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables were as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level and duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes, and vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, and neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up than on top-down processing. The results also suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, monosyllabic words may be sensitive to changes in audibility resulting from amplification.
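
    A minimal sketch of the variance-partitioning logic, comparing R-squared values for different predictor sets; the simulated predictors and coefficients are illustrative, not the study's measurements.

```python
# Sketch of the variance-partitioning logic: fit ordinary least-squares regressions
# on the 50%-correct points with different predictor sets and compare R^2.
# The simulated data and column meanings are illustrative placeholders only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_words = 60
# Placeholder predictors: acoustic/phonetic (RMS level, duration) and lexical (log frequency).
X_acoustic_phonetic = rng.normal(size=(n_words, 2))
X_lexical = rng.normal(size=(n_words, 1))
y_50pct = (2.0 * X_acoustic_phonetic[:, 0] - 1.0 * X_acoustic_phonetic[:, 1]
           + 0.2 * X_lexical[:, 0] + rng.normal(scale=1.0, size=n_words))

r2_ap = LinearRegression().fit(X_acoustic_phonetic, y_50pct).score(X_acoustic_phonetic, y_50pct)
r2_lex = LinearRegression().fit(X_lexical, y_50pct).score(X_lexical, y_50pct)
print(f"R^2 acoustic+phonetic: {r2_ap:.2f}, R^2 lexical: {r2_lex:.2f}")
```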

  19. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), one used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequents, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. For recognition of n words, n SRNFNs are created, one per word; each SRNFN receives the current frame's features and predicts the next frame of the word it models, and its prediction error serves as the recognition criterion. For filtering, one SRNFN is created, and each SRNFN recognizer is connected to this same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the recognizer. Experiments on Mandarin word recognition under different types of noise are performed, and other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with the HSRNFN for noisy speech recognition tasks.
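
    A minimal sketch of the prediction-error recognition criterion, with generic frame predictors standing in for the paper's SRNFN word models; the vocabulary, feature dimensionality, and toy linear predictors are assumptions.

```python
# Sketch of the prediction-error recognition criterion: one predictor per vocabulary
# word receives the current frame's features and predicts the next frame; the word
# whose model accumulates the smallest prediction error over the utterance is the
# recognized word. The predictors here are generic placeholders, not SRNFNs.
import numpy as np

def recognize(frames, word_models):
    """frames: (T, D) feature matrix; word_models: word -> callable(frame) -> predicted next frame."""
    errors = {}
    for word, predict_next in word_models.items():
        err = 0.0
        for t in range(len(frames) - 1):
            err += float(np.sum((predict_next(frames[t]) - frames[t + 1]) ** 2))
        errors[word] = err
    return min(errors, key=errors.get)

# Toy models: each "word" predicts the next frame as a fixed linear map of the current one.
rng = np.random.default_rng(1)
models = {w: (lambda A: (lambda f: A @ f))(rng.normal(size=(8, 8))) for w in ["yi", "er", "san"]}
utterance = rng.normal(size=(20, 8))
print(recognize(utterance, models))
```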

  20. The Role of Morphology in Word Recognition of Hebrew as a Templatic Language

    ERIC Educational Resources Information Center

    Oganyan, Marina

    2017-01-01

    Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the role of the root and template morphemes in recognition. It also explores the role of morphology…

  1. Using Recall to Reduce False Recognition: Diagnostic and Disqualifying Monitoring

    ERIC Educational Resources Information Center

    Gallo, David A.

    2004-01-01

    Whether recall of studied words (e.g., parsley, rosemary, thyme) could reduce false recognition of related lures (e.g., basil) was investigated. Subjects studied words from several categories for a final recognition memory test. Half of the subjects were given standard test instructions, and half were instructed to use recall to reduce false…

  2. Perceptual learning for speech in noise after application of binary time-frequency masks

    PubMed Central

    Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.

    2013-01-01

    Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56 and 0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners, with the greatest improvement observed for the same materials used in training. PMID:23464038
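
    A minimal sketch of an ideal binary time-frequency mask under an assumed local-SNR criterion; the criterion value, STFT settings, and signals are placeholders, not the study's materials.

```python
# Sketch of an ideal binary time-frequency mask: with prior access to the clean
# target and the noise, keep only those STFT units whose local SNR exceeds a
# threshold criterion (in dB), then resynthesize. The criterion value and STFT
# settings are placeholders; the study varied the criterion so that intelligibility
# was not fully restored.
import numpy as np
from scipy.signal import stft, istft

def apply_ideal_binary_mask(target, noise, fs=16000, criterion_db=-6.0, nperseg=512):
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    local_snr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(N) ** 2 + 1e-12))
    mask = (local_snr_db > criterion_db).astype(float)
    _, _, M = stft(target + noise, fs=fs, nperseg=nperseg)
    _, processed = istft(M * mask, fs=fs, nperseg=nperseg)
    return processed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    speech, noise = rng.normal(size=16000), rng.normal(size=16000)  # stand-ins for real signals
    print(apply_ideal_binary_mask(speech, noise).shape)
```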

  3. Using Psychometric Technology in Educational Assessment: The Case of a Schema-Based Isomorphic Approach to the Automatic Generation of Quantitative Reasoning Items

    ERIC Educational Resources Information Center

    Arendasy, Martin; Sommer, Markus

    2007-01-01

    This article deals with the investigation of the psychometric quality and construct validity of algebra word problems generated by means of a schema-based version of the automatic min-max approach. Based on a review of the research literature on algebra word problem solving and automatic item generation, this new approach is introduced as a…

  4. Advances to the development of a basic Mexican sign-to-speech and text language translator

    NASA Astrophysics Data System (ADS)

    Garcia-Bautista, G.; Trujillo-Romero, F.; Diaz-Gonzalez, G.

    2016-09-01

    Sign Language (SL) is the basic alternative communication method among deaf people. However, most hearing people have trouble understanding SL, making communication with deaf people almost impossible and excluding them from daily activities. In this work we present an automatic, basic, real-time sign language translator capable of recognizing a basic list of Mexican Sign Language (MSL) signs, 10 meaningful words, the letters (A-Z), and the numbers (1-10), and translating them into speech and text. The signs were collected from a group of 35 MSL signers performing in front of a Microsoft Kinect™ sensor. The hand gesture recognition system uses the RGB-D camera to build and store point clouds, color data, and skeleton tracking information. We propose a method to obtain representative hand-trajectory pattern information, use a Euclidean segmentation method to obtain the hand shape, and use the Hierarchical Centroid as the feature-extraction method for images of numbers and letters. A pattern recognition method based on a back-propagation Artificial Neural Network (ANN) is used to interpret the hand gestures, and K-fold cross-validation is used for the training and testing stages. Our results achieve an accuracy of 95.71% on words, 98.57% on numbers, and 79.71% on letters. In addition, an interactive user interface was designed to present the results in voice and text format.
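
    A minimal sketch of the classification stage, with scikit-learn's MLPClassifier standing in for the back-propagation ANN and random vectors standing in for the extracted Kinect features; the layer size and fold count are assumptions.

```python
# Sketch of the classification stage: a back-propagation neural network evaluated
# with K-fold cross-validation. MLPClassifier stands in for the paper's ANN, and
# random vectors stand in for the Hierarchical Centroid / trajectory features.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
n_samples, n_features, n_classes = 350, 64, 10   # e.g., 35 signers x 10 words
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, n_classes, size=n_samples)

ann = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
scores = cross_val_score(ann, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```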

  5. Study on Morphological Awareness and Rapid Automatized Naming through Word Reading and Comprehension in Normal and Disabled Reading Arabic-Speaking Children

    ERIC Educational Resources Information Center

    Layes, Smail; Lalonde, Robert; Rebaï, Mohamed

    2017-01-01

    This study explored the role and extent of the involvement of morphological awareness (MA) in contrast to rapid automatized naming (RAN) in word reading and comprehension of Arabic as a morphologically based orthography. We gave measures of word reading, reading comprehension, MA, and RAN in addition to a nonverbal mental ability test to 3 groups…

  6. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.

  7. When benefits outweigh costs: reconsidering "automatic" phonological recoding when reading aloud.

    PubMed

    Robidoux, Serje; Besner, Derek

    2011-06-01

    Skilled readers are slower to read aloud exception words (e.g., PINT) than regular words (e.g., MINT). In the case of exception words, sublexical knowledge competes with the correct pronunciation driven by lexical knowledge, whereas no such competition occurs for regular words. The dominant view is that the cost of this "regularity" effect is evidence that sublexical spelling-sound conversion is impossible to prevent (i.e., is "automatic"). This view has become so reified that the field rarely questions it. However, the results of simulations from the most successful computational models on the table suggest that the claim of "automatic" sublexical phonological recoding is premature given that there is also a benefit conferred by sublexical processing. Taken together with evidence from skilled readers that sublexical phonological recoding can be stopped, we suggest that the field is too narrowly focused when it asserts that sublexical phonological recoding is "automatic" and that a broader, more nuanced and contextually driven approach provides a more useful framework.

  8. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    PubMed

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
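
    A minimal sketch of the additive-factors logic referred to above: in a 2 x 2 design, additivity means the word-frequency effect has the same size at both neighborhood-density levels, while overadditivity means it does not; the latency values below are invented for illustration.

```python
# Sketch of the additive-factors check: compute the interaction contrast
# (difference of frequency effects across density levels). A value near zero is
# consistent with the two factors acting at separate processing levels; a nonzero
# value is consistent with a shared locus. The latencies are invented.
import numpy as np

def interaction_contrast(cell_means):
    """Rows: low/high frequency; columns: sparse/dense neighborhood. Returns difference of frequency effects."""
    return (cell_means[0, 0] - cell_means[1, 0]) - (cell_means[0, 1] - cell_means[1, 1])

additive = np.array([[1200.0, 1260.0],    # low-frequency words: sparse vs. dense neighborhoods
                     [1100.0, 1160.0]])   # high-frequency words
overadditive = np.array([[1200.0, 1320.0],
                         [1100.0, 1160.0]])

print(interaction_contrast(additive))      # 0.0   -> consistent with separate loci
print(interaction_contrast(overadditive))  # -60.0 -> consistent with a shared locus
```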

  9. Leveraging Automatic Speech Recognition Errors to Detect Challenging Speech Segments in TED Talks

    ERIC Educational Resources Information Center

    Mirzaei, Maryam Sadat; Meshgi, Kourosh; Kawahara, Tatsuya

    2016-01-01

    This study investigates the use of Automatic Speech Recognition (ASR) systems to epitomize second language (L2) listeners' problems in perception of TED talks. ASR-generated transcripts of videos often involve recognition errors, which may indicate difficult segments for L2 listeners. This paper aims to discover the root-causes of the ASR errors…

  10. Selective handling of information in patients suffering from restrictive anorexia in an emotional Stroop test and a word recognition test.

    PubMed

    Mendlewicz, L; Nef, F; Simon, Y

    2001-01-01

    Several studies have been carried out using the Stroop test in eating disorders. Some of these studies have brought to light cognitive and attention deficits linked principally to weight and to food in anorexic and bulimic patients. The aim of the current study was to replicate and clarify the existence of cognitive and attention deficits in anorexic patients using the Stroop test and a word recognition test. The recognition test is made up of 160 words: 80 words from the previous Stroop experiment, mixed at random and matched semantically to 80 distractor words. The word recognition test was carried out 2 or 3 days after the Stroop test. Thirty-two subjects took part in the study: 16 female patients hospitalised for anorexia nervosa and 16 normal female controls. Our results do not allow us to confirm the existence of specific cognitive deficits in anorexic patients. Copyright 2001 S. Karger AG, Basel

  11. Distributional structure in language: Contributions to noun–verb difficulty differences in infant word recognition

    PubMed Central

    Willits, Jon A.; Seidenberg, Mark S.; Saffran, Jenny R.

    2014-01-01

    What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme –ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun–verb differences in word recognition. PMID:24908342

  12. Improvement in word recognition score with level is associated with hearing aid ownership among patients with hearing loss.

    PubMed

    Halpin, Chris; Rauch, Steven D

    2012-01-01

    Market surveys consistently show that only 22% of those with hearing loss own hearing aids. This is often ascribed to cosmetics, but is it possible that patients apply a different auditory criterion than do audiologists and manufacturers? We tabulated hearing aid ownership in a survey of 1000 consecutive patients. We separated hearing loss cases, with one cohort in which word recognition in quiet could improve with gain (vs. 40 dB HL) and another without such improvement but nonetheless with audiometric thresholds within the manufacturer's fitting ranges. Overall, we found that exactly 22% of hearing loss patients in this sample owned hearing aids; the same finding has been reported in many previous, well-accepted surveys. However, while all patients in the two cohorts experienced difficulty in noise, patients in the cohort without word recognition improvement were found to own hearing aids at a rate of 0.3%, while those patients whose word recognition could increase with level were found to own hearing aids at a rate of 50%. Results also coherently fit a logistic model where shift of the word recognition performance curve by level corresponded to the likelihood of ownership. In addition to the common attribution of low hearing aid usage to patient denial, cosmetic issues, price, or social stigma, these results provide one alternative explanation based on measurable improvement in word recognition performance. Copyright © 2011 S. Karger AG, Basel.

  13. Response-related fMRI of veridical and false recognition of words.

    PubMed

    Heun, Reinhard; Jessen, Frank; Klose, Uwe; Erb, Michael; Granath, Dirk-Oliver; Grodd, Wolfgang

    2004-02-01

    Studies on the relation between local cerebral activation and retrieval success usually compared high and low performance conditions, and thus showed performance-related activation of different brain areas. Only a few studies directly compared signal intensities of different response categories during retrieval. During verbal recognition, we recently observed increased parieto-occipital activation related to false alarms. The present study intends to replicate and extend this observation by investigating common and differential activation by veridical and false recognition. Fifteen healthy volunteers performed a verbal recognition paradigm using 160 learned target and 160 new distractor words. The subjects had to indicate whether they had learned the word before or not. Echo-planar MRI of blood-oxygen-level-dependent signal changes was performed during this recognition task. Words were classified post hoc according to the subjects' responses, i.e. hits, false alarms, correct rejections and misses. Response-related fMRI-analysis was used to compare activation associated with the subjects' recognition success, i.e. signal intensities related to the presentation of words were compared by the above-mentioned four response types. During recognition, all word categories showed increased bilateral activation of the inferior frontal gyrus, the inferior temporal gyrus, the occipital lobe and the brainstem in comparison with the control condition. Hits and false alarms activated several areas including the left medial and lateral parieto-occipital cortex in comparison with subjectively unknown items, i.e. correct rejections and misses. Hits showed more pronounced activation in the medial, false alarms in the lateral parts of the left parieto-occipital cortex. Veridical and false recognition show common as well as different areas of cerebral activation in the left parieto-occipital lobe: increased activation of the medial parietal cortex by hits may correspond to true recognition, increased activation of the parieto-occipital cortex by false alarms may correspond to familiarity decisions. Further studies are needed to investigate the reasons for false decisions in healthy subjects and patients with memory problems.

  14. Clinical Strategies for Sampling Word Recognition Performance.

    PubMed

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
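
    A minimal sketch of the simulation idea, treating an N-word score as a binomial sample around the client's true ability; the spread criterion below is a simplification and does not reproduce the study's critical-range tables.

```python
# Sketch of the Monte Carlo idea: percent-correct scores from an N-word list are
# binomial samples around the client's true ability, so shorter lists produce
# noisier scores. Reporting the score spread is a simplification of the study's
# actual critical-range analysis.
import numpy as np

rng = np.random.default_rng(0)

def score_spread(n_words, p_true=0.60, n_sims=100_000):
    """Standard deviation of simulated percent-correct scores for an n-word list."""
    scores = rng.binomial(n_words, p_true, size=n_sims) / n_words * 100
    return float(scores.std())

for n in (25, 50, 125):
    print(f"{n}-word list: SD of scores ~ {score_spread(n):.1f} percentage points")
```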

  15. Interrupted Monosyllabic Words: The Effects of Ten Interruption Locations on Recognition Performance by Older Listeners with Sensorineural Hearing Loss.

    PubMed

    Wilson, Richard H; Sharrett, Kadie C

    2017-01-01

    Two previous experiments from our laboratory with 70 interrupted monosyllabic words demonstrated that recognition performance was influenced by the temporal location of the interruption pattern. The interruption pattern (10 interruptions/sec, 50% duty cycle) was always the same and referenced word onset; the only difference between the patterns was the temporal location of the on- and off-segments of the interruption cycle. In the first study, both young and older listeners obtained better recognition performances when the initial on-segment coincided with word onset than when the initial on-segment was delayed by 50 msec. The second experiment with 24 young listeners detailed recognition performance as the interruption pattern was incremented in 10-msec steps through the 0- to 90-msec onset range. Across the onset conditions, 95% of the functions were either flat or U-shaped. To define the effects that interruption pattern locations had on word recognition by older listeners with sensorineural hearing loss as the interruption pattern incremented, re: word onset, from 0 to 90 msec in 10-msec steps. A repeated-measures design with ten interruption patterns (onset conditions) and one uninterruption condition. Twenty-four older males (mean = 69.6 yr) with sensorineural hearing loss participated in two 1-hour sessions. The three-frequency pure-tone average was 24.0 dB HL and word recognition was ≥80% correct. Seventy consonant-vowel nucleus-consonant words formed the corpus of materials with 25 additional words used for practice. For each participant, the 700 interrupted stimuli (70 words by 10 onset conditions), the 70 words uninterrupted, and two practice lists each were randomized and recorded on compact disc in 33 tracks of 25 words each. The data were analyzed at the participant and word levels and compared to the results obtained earlier on 24 young listeners with normal hearing. The mean recognition performance on the 70 words uninterrupted was 91.0% with an overall mean performance on the ten interruption conditions of 63.2% (range: 57.9-69.3%), compared to 80.4% (range: 73.0-87.7%) obtained earlier on the young adults. The best performances were at the extremes of the onset conditions. Standard deviations ranged from 22.1% to 28.1% (24 participants) and from 9.2% to 12.8% (70 words). An arithmetic algorithm categorized the shapes of the psychometric functions across the ten onset conditions. With the older participants in the current study, 40% of the functions were flat, 41.4% were U-shaped, and 18.6% were inverted U-shaped, which compared favorably to the function shapes by the young listeners in the earlier study of 50.0%, 41.4%, and 8.6%, respectively. There were two words on which the older listeners had 40% better performances. Collectively, the data are orderly, but at the individual word or participant level, the data are somewhat volatile, which may reflect auditory processing differences between the participant groups. The diversity of recognition performances by the older listeners on the ten interruption conditions with each of the 70 words supports the notion that the term hearing loss is inclusive of processes well beyond the filtering produced by end-organ sensitivity deficits. American Academy of Audiology

  16. Postprocessing for character recognition using pattern features and linguistic information

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi

    1993-04-01

    We propose a new method of post-processing for character recognition using pattern features and linguistic information. This method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters. This post-processing method is characterized by using two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and the existence of characters with similar patterns. Therefore, it is not practical for a character recognition system to recognize all characters in detail. First, this post-processing method generates a candidate character table by recognizing the simplest features of characters. Then, it selects words corresponding to the characters in the candidate character table by referring to a word and grammar dictionary and chooses the most suitable words. If the correct character is included in the candidate character table, this process can correct an error; if it is not included, it cannot. Therefore, when linguistic information (the word and grammar dictionary) suggests that the correct character is missing from the candidate character table, the method presumes a character and verifies it by character recognition using complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used for the editorials of a Japanese newspaper (Asahi Shinbun).
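
    The two-pass idea, a coarse candidate table arbitrated by a word dictionary, with unmatched positions sent back for detailed recognition, can be sketched compactly. In the Python sketch below, the candidate sets, dictionary, and scoring are toy stand-ins for the paper's pattern-feature recognizer and word/grammar dictionaries, so treat it as an illustration of the control flow only.

        def best_dictionary_word(candidates, dictionary):
            """candidates: one set of coarsely recognized characters per position.
            Returns the dictionary word agreeing with the most positions, plus the
            positions that still need a second, detailed recognition pass."""
            best_word, best_hits = None, -1
            for word in dictionary:
                if len(word) != len(candidates):
                    continue
                hits = sum(ch in cands for ch, cands in zip(word, candidates))
                if hits > best_hits:
                    best_word, best_hits = word, hits
            if best_word is None:
                return None, list(range(len(candidates)))
            to_verify = [i for i, (ch, cands) in enumerate(zip(best_word, candidates))
                         if ch not in cands]
            return best_word, to_verify  # verify these positions with complex features

        # Toy example: the coarse pass misread the second character.
        candidates = [{"日", "目"}, {"木", "未"}, {"語", "話"}]
        dictionary = ["日本語", "日本人"]
        print(best_dictionary_word(candidates, dictionary))  # ('日本語', [1])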

  17. Design and performance of a large vocabulary discrete word recognition system. Volume 1: Technical report. [real time computer technique for voice data processing

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, and test of a 100-word vocabulary near real time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via verification display keyboard, relationship of classified word to neighboring word, and a versatile input/output interface to accommodate a variety of applications.

  18. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    PubMed

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.

  19. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  20. The Effect of Talker Variability on Word Recognition in Preschool Children

    PubMed Central

    Ryalls, Brigette Oliver; Pisoni, David B.

    2012-01-01

    In a series of experiments, the authors investigated the effects of talker variability on children’s word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. PMID:9149923

  1. Exploring the Neural Representation of Novel Words Learned through Enactment in a Word Recognition Task

    PubMed Central

    Macedonia, Manuela; Mueller, Karsten

    2016-01-01

    Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment. PMID:27445918

  2. Word recognition materials for native speakers of Taiwan Mandarin.

    PubMed

    Nissen, Shawn L; Harris, Richard W; Dukes, Alycia

    2008-06-01

    To select, digitally record, evaluate, and psychometrically equate word recognition materials that can be used to measure the speech perception abilities of native speakers of Taiwan Mandarin in quiet. Frequently used bisyllabic words produced by male and female talkers of Taiwan Mandarin were digitally recorded and subsequently evaluated using 20 native listeners with normal hearing at 10 intensity levels (-5 to 40 dB HL) in increments of 5 dB. Using logistic regression, 200 words with the steepest psychometric slopes were divided into 4 lists and 8 half-lists that were relatively equivalent in psychometric function slope. To increase auditory homogeneity of the lists, the intensity of words in each list was digitally adjusted so that the threshold of each list was equal to the midpoint between the mean thresholds of the male and female half-lists. Digital recordings of the word recognition lists and the associated clinical instructions are available on CD upon request.
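
    Selecting words by psychometric slope amounts to fitting a logistic function to each word's proportion-correct-by-intensity data and ranking the fitted slopes; equating lists then reduces to shifting each recording's level so its 50% point matches a target threshold. A minimal sketch of those two steps (Python with scipy; the intensity grid matches the -5 to 40 dB HL range above, but the response data and target threshold are invented for illustration):

        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(level, midpoint, slope):
            """Proportion of listeners responding correctly at a given level (dB HL)."""
            return 1.0 / (1.0 + np.exp(-slope * (level - midpoint)))

        levels = np.arange(-5, 45, 5)        # -5 to 40 dB HL in 5-dB steps
        # Hypothetical proportions correct for one word (20 listeners per level).
        p_correct = np.array([0.00, 0.05, 0.10, 0.30, 0.55, 0.80, 0.90, 0.95, 1.00, 1.00])

        (midpoint, slope), _ = curve_fit(logistic, levels, p_correct, p0=[15.0, 0.3])
        print(f"50% threshold: {midpoint:.1f} dB HL, slope: {slope:.3f} per dB")

        # Equate lists by shifting the word so its threshold hits a target value
        # (e.g., the midpoint between the male and female half-list means).
        target_threshold = 12.0              # hypothetical target, in dB HL
        print(f"apply a level adjustment of {target_threshold - midpoint:+.1f} dB")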

  3. (Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits

    ERIC Educational Resources Information Center

    Smith, Mark

    2006-01-01

    Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…

  4. The Effects of Environmental Context on Recognition Memory and Claims of Remembering

    ERIC Educational Resources Information Center

    Hockley, William E.

    2008-01-01

    Recognition memory for words was tested in same or different contexts using the remember/know response procedure. Context was manipulated by presenting words in different screen colors and locations and by presenting words against real-world photographs. Overall hit and false-alarm rates were higher for tests presented in an old context compared…

  5. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    ERIC Educational Resources Information Center

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  6. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  7. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  8. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation.

    PubMed

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

    Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences-and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions.
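
    Controlling for frequency when relating MWU counts to age of first production (or to reaction times) can be approximated with a partial correlation: regress both variables on log frequency and correlate the residuals. The sketch below (Python with numpy) shows that computation on simulated placeholder data; it is not the authors' corpus pipeline, only the statistical idea.

        import numpy as np

        def partial_corr(x, y, control):
            """Correlation of x and y after regressing a control variable out of both."""
            def residualize(v, c):
                design = np.column_stack([np.ones_like(c), c])
                coefs, *_ = np.linalg.lstsq(design, v, rcond=None)
                return v - design @ coefs
            return np.corrcoef(residualize(x, control), residualize(y, control))[0, 1]

        rng = np.random.default_rng(1)
        log_freq = rng.normal(size=500)                        # placeholder word frequencies
        n_mwus = 0.8 * log_freq + rng.normal(size=500)         # MWU counts track frequency
        age_first_prod = -0.5 * log_freq - 0.3 * n_mwus + rng.normal(size=500)

        print("raw r:              ", np.corrcoef(n_mwus, age_first_prod)[0, 1])
        print("r controlling freq.:", partial_corr(n_mwus, age_first_prod, log_freq))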

  9. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation

    PubMed Central

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

    Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences—and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions. PMID:28450842

  10. Beyond word recognition: understanding pediatric oral health literacy.

    PubMed

    Richman, Julia Anne; Huebner, Colleen E; Leggott, Penelope J; Mouradian, Wendy E; Mancl, Lloyd A

    2011-01-01

    Parental oral health literacy is proposed to be an indicator of children's oral health. The purpose of this study was to test if word recognition, commonly used to assess health literacy, is an adequate measure of pediatric oral health literacy. This study evaluated 3 aspects of oral health literacy and parent-reported child oral health. A 3-part pediatric oral health literacy inventory was created to assess parents' word recognition, vocabulary knowledge, and comprehension of 35 terms used in pediatric dentistry. The inventory was administered to 45 English-speaking parents of children enrolled in Head Start. Parents' ability to read dental terms was not associated with vocabulary knowledge (r=0.29, P<.06) or comprehension (r=0.28, P>.06) of the terms. Vocabulary knowledge was strongly associated with comprehension (r=0.80, P<.001). Parent-reported child oral health status was not associated with word recognition, vocabulary knowledge, or comprehension; however parents reporting either excellent or fair/poor ratings had higher scores on all components of the inventory. Word recognition is an inadequate indicator of comprehension of pediatric oral health concepts; pediatric oral health literacy is a multifaceted construct. Parents with adequate reading ability may have difficulty understanding oral health information.

  11. Recognition memory across the lifespan: the impact of word frequency and study-test interval on estimates of familiarity and recollection

    PubMed Central

    Meier, Beat; Rey-Mermet, Alodie; Rothen, Nicolas; Graf, Peter

    2013-01-01

    The goal of this study was to investigate recognition memory performance across the lifespan and to determine how estimates of recollection and familiarity contribute to performance. In each of three experiments, participants from five groups from 14 up to 85 years of age (children, young adults, middle-aged adults, young-old adults, and old-old adults) were presented with high- and low-frequency words in a study phase and were tested immediately afterwards and/or after a one day retention interval. The results showed that word frequency and retention interval affected recognition memory performance as well as estimates of recollection and familiarity. Across the lifespan, the trajectory of recognition memory followed an inverse u-shape function that was neither affected by word frequency nor by retention interval. The trajectory of estimates of recollection also followed an inverse u-shape function, and was especially pronounced for low-frequency words. In contrast, estimates of familiarity did not differ across the lifespan. The results indicate that age differences in recognition memory are mainly due to differences in processes related to recollection while the contribution of familiarity-based processes seems to be age-invariant. PMID:24198796

  12. Evaluation of a wireless audio streaming accessory to improve mobile telephone performance of cochlear implant users.

    PubMed

    Wolfe, Jace; Morais Duke, Mila; Schafer, Erin; Cire, George; Menapace, Christine; O'Neill, Lori

    2016-01-01

    The objective of this study was to evaluate the potential improvement in word recognition in quiet and in noise obtained with use of a Bluetooth-compatible wireless hearing assistance technology (HAT) relative to the acoustic mobile telephone condition (e.g. the mobile telephone receiver held to the microphone of the sound processor). A two-way repeated measures design was used to evaluate differences in telephone word recognition obtained in quiet and in competing noise in the acoustic mobile telephone condition compared to performance obtained with use of the CI sound processor and a telephone HAT. Sixteen adult users of Nucleus cochlear implants and the Nucleus 6 sound processor were included in this study. Word recognition over the mobile telephone in quiet and in noise was significantly better with use of the wireless HAT compared to performance in the acoustic mobile telephone condition. Word recognition over the mobile telephone was better in quiet when compared to performance in noise. The results of this study indicate that use of a wireless HAT improves word recognition over the mobile telephone in quiet and in noise relative to performance in the acoustic mobile telephone condition for a group of adult cochlear implant recipients.

  13. When fear forms memories: threat of shock and brain potentials during encoding and recognition.

    PubMed

    Weymar, Mathias; Bradley, Margaret M; Hamm, Alfons O; Lang, Peter J

    2013-03-01

    The anticipation of highly aversive events is associated with measurable defensive activation, and both animal and human research suggests that stress-inducing contexts can facilitate memory. Here, we investigated whether encoding stimuli in the context of anticipating an aversive shock affects recognition memory. Event-related potentials (ERPs) were measured during a recognition test for words that were encoded in a font color that signaled threat or safety. At encoding, cues signaling threat of shock, compared to safety, prompted enhanced P2 and P3 components. Correct recognition of words encoded in the context of threat, compared to safety, was associated with an enhanced old-new ERP difference (500-700 msec; centro-parietal), and this difference was most reliable for emotional words. Moreover, larger old-new ERP differences when recognizing emotional words encoded in a threatening context were associated with better recognition, compared to words encoded in safety. Taken together, the data indicate enhanced memory for stimuli encoded in a context in which an aversive event is merely anticipated, which could assist in understanding effects of anxiety and stress on memory processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.

    PubMed

    Liu, Han-Hui

    2018-01-01

    Background: Most of the previous inattentional blindness (IB) studies focused on the factors that contributed to the detection of unattended stimuli. The age-related changes on IB have rarely been investigated across all age groups. In the current study, by using the dual-task IB paradigm, we aimed to explore the age-related effects of attended stimuli type and congruency between attended and unattended stimuli on IB. Methods: The current study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) in the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) in the IB experiment. We applied the superimposed picture and word streams experimental paradigm to explore the age-related effects of attended stimuli type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups presented significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Participants presented decreased recognition for unattended pictures or words from adolescents to young adults and middle-aged adults. When the pictures and words are congruent, all the participants showed significantly higher recognition scores for unattended stimuli in comparison with incongruent condition. Adolescents and young adults did not show recognition differences when primary tasks were attending pictures or words. Conclusion: The current findings showed that all participants presented better recognition scores for attended stimuli in comparison with unattended stimuli, and the recognition scores decreased from the adolescents to young and middle-aged adults. The findings partly supported the attention capacity models of IB.

  15. Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently.

    PubMed

    Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin

    2015-02-01

    The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading-possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. The picture superiority effect in a cross-modality recognition task.

    PubMed

    Stenbert, G; Radeborg, K; Hedman, L R

    1995-07-01

    Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  17. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the level of full phonological overlap; in Experiment 2, phonological information was manipulated at the level of partial phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full overlap or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  18. Internally- and externally-driven network transitions as a basis for automatic and strategic processes in semantic priming: theory and experimental validation

    PubMed Central

    Lerner, Itamar; Shriki, Oren

    2014-01-01

    For the last four decades, semantic priming—the facilitation in recognition of a target word when it follows the presentation of a semantically related prime word—has been a central topic in research of human cognitive processing. Studies have drawn a complex picture of findings which demonstrated the sensitivity of this priming effect to a unique combination of variables, including, but not limited to, the type of relatedness between primes and targets, the prime-target Stimulus Onset Asynchrony (SOA), the relatedness proportion (RP) in the stimuli list and the specific task subjects are required to perform. Automatic processes depending on the activation patterns of semantic representations in memory and controlled strategies adapted by individuals when attempting to maximize their recognition performance have both been implicated in contributing to the results. Lately, we have published a new model of semantic priming that addresses the majority of these findings within one conceptual framework. In our model, semantic memory is depicted as an attractor neural network in which stochastic transitions from one stored pattern to another are continually taking place due to synaptic depression mechanisms. We have shown how such transitions, in combination with a reinforcement-learning rule that adjusts their pace, resemble the classic automatic and controlled processes involved in semantic priming and account for a great number of the findings in the literature. Here, we review the core findings of our model and present new simulations that show how similar principles of parameter-adjustments could account for additional data not addressed in our previous studies, such as the relation between expectancy and inhibition in priming, target frequency and target degradation effects. Finally, we describe two human experiments that validate several key predictions of the model. PMID:24795670
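
    The mechanism at the heart of the model, stochastic transitions between stored patterns driven by synaptic depression, can be caricatured with a small Hopfield-style network whose recently used connections are transiently weakened. The sketch below (Python with numpy) is only a caricature under those assumptions: the learning rule, depression constants, and update scheme are placeholders and do not reproduce the published model or its reinforcement-learning component.

        import numpy as np

        rng = np.random.default_rng(2)
        n_units, n_patterns = 100, 5

        # Store random +1/-1 patterns with a Hebbian rule.
        patterns = rng.choice([-1, 1], size=(n_patterns, n_units)).astype(float)
        weights = patterns.T @ patterns / n_units
        np.fill_diagonal(weights, 0.0)

        state = patterns[0].copy()                  # start inside the first attractor
        resources = np.ones_like(weights)           # multiplicative synaptic resources

        for step in range(200):
            state = np.sign((weights * resources) @ state + 1e-9)
            co_active = np.outer(state, state) > 0  # synapses used on this step
            resources[co_active] *= 0.97            # depress them ...
            resources += 0.005 * (1.0 - resources)  # ... and let everything recover slowly
            if step % 40 == 0:
                overlaps = patterns @ state / n_units
                # The largest overlap marks the currently occupied pattern; a change in
                # which pattern dominates corresponds to a transition between attractors.
                print(step, np.round(overlaps, 2))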

  19. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    PubMed

    Tuft, Samantha E; MᶜLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  20. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  2. The word-frequency paradox for recall/recognition occurs for pictures.

    PubMed

    Karlsen, Paul Johan; Snodgrass, Joan Gay

    2004-08-01

    A yes-no recognition task and two recall tasks were conducted using pictures of high and low familiarity ratings. Picture familiarity had analogous effects to word frequency, and replicated the word-frequency paradox in recall and recognition. Low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.

  3. GRAM-CNN: a deep learning approach with local context for named entity recognition in biomedical text.

    PubMed

    Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile

    2018-05-01

    Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models. We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems. The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. andyli@ece.ufl.edu or aconesa@ufl.edu. Supplementary data are available at Bioinformatics online.
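
    The architectural idea described here, character n-gram features obtained by convolving over character embeddings and concatenated with a word embedding before per-token classification, can be sketched in a few dozen lines. The PyTorch sketch below is a schematic reconstruction from the abstract only: vocabulary sizes, dimensions, kernel widths, and the tagging head are placeholders, and the released GRAM-CNN code linked in the record differs in detail.

        import torch
        import torch.nn as nn

        class CharNgramCNN(nn.Module):
            """Character n-gram features for one word via 1-D convolutions."""
            def __init__(self, n_chars=100, char_dim=30, n_filters=50, kernel_sizes=(2, 3, 4)):
                super().__init__()
                self.embed = nn.Embedding(n_chars, char_dim, padding_idx=0)
                self.convs = nn.ModuleList(
                    nn.Conv1d(char_dim, n_filters, k, padding=k // 2) for k in kernel_sizes
                )

            def forward(self, char_ids):                       # (n_tokens, max_word_len)
                x = self.embed(char_ids).transpose(1, 2)       # (n_tokens, char_dim, len)
                pooled = [conv(x).max(dim=2).values for conv in self.convs]
                return torch.cat(pooled, dim=1)                # (n_tokens, n_filters * 3)

        class TokenTagger(nn.Module):
            """Word embedding + character n-gram features -> per-token entity scores."""
            def __init__(self, n_words=5000, word_dim=100, n_tags=5):
                super().__init__()
                self.word_embed = nn.Embedding(n_words, word_dim)
                self.char_cnn = CharNgramCNN()
                self.classifier = nn.Linear(word_dim + 150, n_tags)

            def forward(self, word_ids, char_ids):
                feats = torch.cat([self.word_embed(word_ids), self.char_cnn(char_ids)], dim=1)
                return self.classifier(feats)                  # e.g. BIO tag scores per token

        tagger = TokenTagger()
        scores = tagger(torch.randint(0, 5000, (7,)), torch.randint(1, 100, (7, 12)))
        print(scores.shape)                                    # torch.Size([7, 5])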

  4. GRAM-CNN: a deep learning approach with local context for named entity recognition in biomedical text

    PubMed Central

    Zhu, Qile; Li, Xiaolin; Conesa, Ana; Pereira, Cécile

    2018-01-01

    Abstract Motivation Best performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models. Results We propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems. Availability and implementation The GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN. Contact andyli@ece.ufl.edu or aconesa@ufl.edu Supplementary information Supplementary data are available at Bioinformatics online. PMID:29272325

  5. Speech Processing and Recognition (SPaRe)

    DTIC Science & Technology

    2011-01-01

    ...results in the areas of automatic speech recognition (ASR), speech processing, machine translation (MT), natural language processing (NLP), and Information Retrieval (IR)... Figure 9, the IOC was only expected to provide document submission and search; automatic speech recognition (ASR) for English, Spanish, Arabic, and...

  6. The Relationships among Cognitive Correlates and Irregular Word, Non-Word, and Word Reading

    ERIC Educational Resources Information Center

    Abu-Hamour, Bashir; Urso, Annmarie; Mather, Nancy

    2012-01-01

    This study explored four hypotheses: (a) the relationships among rapid automatized naming (RAN) and processing speed (PS) to irregular word, non-word, and word reading; (b) the predictive power of various RAN and PS measures, (c) the cognitive correlates that best predicted irregular word, non-word, and word reading, and (d) reading performance of…

  7. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  8. Evaluation of a voice recognition system for the MOTAS pseudo pilot station function

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1982-01-01

    The Langley Research Center has undertaken a technology development activity to provide a capability, the mission oriented terminal area simulation (MOTAS), wherein terminal area and aircraft systems studies can be performed. An experiment was conducted to evaluate state-of-the-art voice recognition technology and specifically, the Threshold 600 voice recognition system to serve as an aircraft control input device for the MOTAS pseudo pilot station function. The results of the experiment using ten subjects showed a recognition error of 3.67 percent for a 48-word vocabulary tested against a programmed vocabulary of 103 words. After the ten subjects retrained the Threshold 600 system for the words which were misrecognized or rejected, the recognition error decreased to 1.96 percent. The rejection rates for both cases were less than 0.70 percent. Based on the results of the experiment, voice recognition technology and specifically the Threshold 600 voice recognition system were chosen to fulfill this MOTAS function.

  9. Four-Channel Biosignal Analysis and Feature Extraction for Automatic Emotion Recognition

    NASA Astrophysics Data System (ADS)

    Kim, Jonghwa; André, Elisabeth

    This paper investigates the potential of physiological signals as a reliable channel for automatic recognition of a user's emotional state. For emotion recognition, little attention has so far been paid to physiological signals compared to audio-visual emotion channels such as facial expression or speech. All essential stages of an automatic recognition system using biosignals are discussed, from recording the physiological dataset up to feature-based multiclass classification. Four-channel biosensors are used to measure electromyogram, electrocardiogram, skin conductivity, and respiration changes. A wide range of physiological features from various analysis domains, including time/frequency, entropy, geometric analysis, subband spectra, multiscale entropy, etc., is proposed in order to search for the best emotion-relevant features and to correlate them with emotional states. The best features extracted are specified in detail and their effectiveness is proven by emotion recognition results.
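
    A few of the feature families mentioned (time-domain statistics, subband spectral power, entropy) are easy to illustrate. The sketch below (Python with numpy/scipy) computes a handful of such features for one biosignal channel; the sampling rate, band edges, and the signal itself are placeholders, not the study's recording protocol.

        import numpy as np
        from scipy.signal import welch

        def basic_features(signal, fs):
            """A small, illustrative feature vector for one physiological channel."""
            feats = {
                "mean": float(np.mean(signal)),
                "std": float(np.std(signal)),
                "rms": float(np.sqrt(np.mean(signal ** 2))),
                "mean_abs_first_diff": float(np.mean(np.abs(np.diff(signal)))),
            }
            # Subband spectral power via Welch's method (band edges are placeholders).
            freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 256))
            df = freqs[1] - freqs[0]
            for lo, hi in [(0.04, 0.15), (0.15, 0.40), (0.40, 4.0)]:
                band = (freqs >= lo) & (freqs < hi)
                feats[f"power_{lo}_{hi}"] = float(np.sum(psd[band]) * df)
            # Shannon entropy of the normalized spectrum.
            p = psd / np.sum(psd)
            feats["spectral_entropy"] = float(-np.sum(p * np.log2(p + 1e-12)))
            return feats

        fs = 32.0                                     # hypothetical sampling rate (Hz)
        t = np.arange(0, 60, 1 / fs)
        channel = np.sin(2 * np.pi * 1.1 * t) + 0.1 * np.random.default_rng(4).standard_normal(t.size)
        print(basic_features(channel, fs))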

  10. Visual recognition of permuted words

    NASA Astrophysics Data System (ADS)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects visual recognition of words for two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition or reading of permuted and non-permuted words are two distinct mental-level processes, and that people use different strategies in handling permuted words as compared to normal words. A comparison of reading behavior across the two languages is also presented. We frame our study in the context of dual-route theories of reading and observe that dual-route theory is consistent with our hypothesis of distinct underlying cognitive behavior for reading permuted and non-permuted words. We conducted three experiments in lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and t-tests to determine significant differences in response-time latencies for the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% in the case of Urdu and 11% in the case of German. We also found a considerable difference in reading behavior between the cursive and alphabetic languages: reading of Urdu was comparatively slower than reading of German due to the characteristics of its cursive script.

  11. Talker and accent variability effects on spoken word recognition

    NASA Astrophysics Data System (ADS)

    Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae

    2003-04-01

    A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.

  12. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks of voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while those of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment, a response behavior associated with linear systems.
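
    The hair-cell element referred to here is typically written as the normal form of a Hopf oscillator driven by the stimulus, with a threshold applied to its response amplitude. The sketch below (Python) integrates that normal form in the frame rotating with the drive; the parameter values, drive strengths, and thresholding rule are illustrative guesses rather than the paper's exact model, but they display the hallmark compressive (cube-root) response near the bifurcation.

        import numpy as np

        def hopf_response(mu, detuning, force, threshold, dt=0.05, n_steps=50_000):
            """Hopf normal form in the frame rotating with the drive:
               dw/dt = (mu + i*detuning) * w - |w|**2 * w + force
            Returns the steady-state response amplitude, zeroed if below threshold."""
            w = 0.0 + 0.0j
            for _ in range(n_steps):
                w += dt * ((mu + 1j * detuning) * w - abs(w) ** 2 * w + force)
            amp = abs(w)
            return amp if amp >= threshold else 0.0

        # At the bifurcation point (mu = 0) and on resonance (detuning = 0), the steady
        # amplitude follows the compressive law amp ~ force**(1/3); the threshold then
        # discards responses to inputs that are too weak.
        for f in (1e-4, 1e-3, 1e-2, 1e-1):
            print(f, round(hopf_response(mu=0.0, detuning=0.0, force=f, threshold=0.05), 4))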

  13. Emotionally enhanced memory for negatively arousing words: storage or retrieval advantage?

    PubMed

    Nadarevic, Lena

    2017-12-01

    People typically remember emotionally negative words better than neutral words. Two experiments are reported that investigate whether emotionally enhanced memory (EEM) for negatively arousing words is based on a storage or retrieval advantage. Participants studied non-word-word pairs that either involved negatively arousing or neutral target words. Memory for these target words was tested by means of a recognition test and a cued-recall test. Data were analysed with a multinomial model that allows the disentanglement of storage and retrieval processes in the present recognition-then-cued-recall paradigm. In both experiments the multinomial analyses revealed no storage differences between negatively arousing and neutral words but a clear retrieval advantage for negatively arousing words in the cued-recall test. These findings suggest that EEM for negatively arousing words is driven by associative processes.

  14. False memory and level of processing effect: an event-related potential study.

    PubMed

    Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara

    2012-09-12

    Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were auditorily presented here to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERP than items studied under deep encoding conditions at a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.

  15. Evaluating a Split Processing Model of Visual Word Recognition: Effects of Orthographic Neighborhood Size

    ERIC Educational Resources Information Center

    Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.

    2004-01-01

    The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…

  16. The Picture Superiority Effect in Recognition Memory: A Developmental Study Using the Response Signal Procedure

    ERIC Educational Resources Information Center

    Defeyter, Margaret Anne; Russo, Riccardo; McPartlin, Pamela Louise

    2009-01-01

    Items studied as pictures are better remembered than items studied as words even when test items are presented as words. The present study examined the development of this picture superiority effect in recognition memory. Four groups ranging in age from 7 to 20 years participated. They studied words and pictures, with test stimuli always presented…

  17. Learning-Dependent Changes of Associations between Unfamiliar Words and Perceptual Features: A 15-Day Longitudinal Study

    ERIC Educational Resources Information Center

    Kambara, Toshimune; Tsukiura, Takashi; Shigemune, Yayoi; Kanno, Akitake; Nouchi, Rui; Yomogida, Yukihito; Kawashima, Ryuta

    2013-01-01

    This study examined behavioral changes in 15-day learning of word-picture (WP) and word-sound (WS) associations, using meaningless stimuli. Subjects performed a learning task and two recognition tasks under the WP and WS conditions every day for 15 days. Two main findings emerged from this study. First, behavioral data of recognition accuracy and…

  18. Genetic Influences on Early Word Recognition Abilities and Disabilities: A Study of 7-Year-Old Twins

    ERIC Educational Resources Information Center

    Harlaar, Nicole; Spinath, Frank M.; Dale, Philip S.; Plomin, Robert

    2005-01-01

    Background: A fundamental issue for child psychology concerns the origins of individual differences in early reading development. Method: A measure of word recognition, the Test of Word Reading Efficiency (TOWRE), was administered by telephone to a representative population sample of 3,909 same-sex and opposite-sex pairs of 7-year-old twins.…

  19. The Processing of Consonants and Vowels during Letter Identity and Letter Position Assignment in Visual-Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel

    2011-01-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…

  20. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    ERIC Educational Resources Information Center

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  1. Re-Evaluating Split-Fovea Processing in Word Recognition: A Critical Assessment of Recent Research

    ERIC Educational Resources Information Center

    Jordan, Timothy R.; Paterson, Kevin B.

    2009-01-01

    In recent years, some researchers have proposed that a fundamental component of the word recognition process is that each fovea is divided precisely at its vertical midline and that information either side of this midline projects to different, contralateral hemispheres. Thus, when a word is fixated, all letters to the left of the point of…

  2. Charting the Functional Relevance of Broca's Area for Visual Word Recognition and Picture Naming in Dutch Using fMRI-Guided TMS

    ERIC Educational Resources Information Center

    Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo

    2013-01-01

    Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…

  3. Reading Habits, Perceptual Learning, and Recognition of Printed Words

    ERIC Educational Resources Information Center

    Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram

    2004-01-01

    The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…

  4. The Effects of Multiple Script Priming on Word Recognition by the Two Cerebral Hemispheres: Implications for Discourse Processing

    ERIC Educational Resources Information Center

    Faust, Miriam; Barak, Ofra; Chiarello, Christine

    2006-01-01

    The present study examined left (LH) and right (RH) hemisphere involvement in discourse processing by testing the ability of each hemisphere to use world knowledge in the form of script contexts for word recognition. Participants made lexical decisions to laterally presented target words preceded by centrally presented script primes (four…

  5. Encoding instructions and stimulus presentation in local environmental context-dependent memory studies.

    PubMed

    Markopoulos, G; Rutherford, A; Cairns, C; Green, J

    2010-08-01

    Murnane and Phelps (1993) recommend word pair presentations in local environmental context (EC) studies to prevent associations being formed between successively presented items and their ECs and a consequent reduction in the EC effect. Two experiments were conducted to assess the veracity of this assumption. In Experiment 1, participants memorised single words or word pairs, or categorised them as natural or man-made. Their free recall protocols were examined to assess any associations established between successively presented items. Fewest associations were observed when the item-specific encoding task (i.e., natural or man-made categorisation of word referents) was applied to single words. These findings were examined further in Experiment 2, where the influence of encoding instructions and stimulus presentation on local EC dependent recognition memory was examined. Consistent with recognition dual-process signal detection model predictions and findings (e.g., Macken, 2002; Parks & Yonelinas, 2008), recollection sensitivity, but not familiarity sensitivity, was found to be local EC dependent. However, local EC dependent recognition was observed only after item-specific encoding instructions, irrespective of stimulus presentation. These findings and the existing literature suggest that the use of single word presentations and item-specific encoding enhances local EC dependent recognition.

  6. Levels-of-processing effect on frontotemporal function in schizophrenia during word encoding and recognition.

    PubMed

    Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E

    2005-10-01

    Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.

  7. Correlation applied to the recognition of regular geometric figures

    NASA Astrophysics Data System (ADS)

    Lasso, William; Morales, Yaileth; Vega, Fabio; Díaz, Leonardo; Flórez, Daniel; Torres, Cesar

    2013-11-01

    A system was developed that recognizes regular geometric figures. Images are captured automatically by the software after it validates that a figure is present in front of the camera lens; the digitized image is then compared with a database of previously captured images, recognized, and finally identified by spoken words naming the figure. The contribution of the proposed system is that data acquisition occurs in real time using spy-type smart glasses with a USB interface, yielding an equally effective but far more economical system. This tool may be useful as an application through which visually impaired people can obtain information about their surrounding environment.
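
    A minimal illustrative sketch (not the authors' implementation) of the correlation idea described above: a captured grayscale frame is compared against a small database of stored figure templates by normalized cross-correlation, and the best-scoring template gives the figure name to be spoken. The file names and template set are hypothetical placeholders.

    ```python
    import cv2

    def best_match(scene_gray, templates):
        """Return the template name with the highest normalized correlation peak."""
        scores = {}
        for name, tmpl in templates.items():
            # TM_CCOEFF_NORMED yields a correlation score in [-1, 1]
            result = cv2.matchTemplate(scene_gray, tmpl, cv2.TM_CCOEFF_NORMED)
            scores[name] = float(result.max())
        return max(scores, key=scores.get), scores

    if __name__ == "__main__":
        scene = cv2.imread("capture.png", cv2.IMREAD_GRAYSCALE)         # hypothetical camera frame
        templates = {                                                   # hypothetical template database
            "circle": cv2.imread("circle.png", cv2.IMREAD_GRAYSCALE),
            "square": cv2.imread("square.png", cv2.IMREAD_GRAYSCALE),
            "triangle": cv2.imread("triangle.png", cv2.IMREAD_GRAYSCALE),
        }
        label, scores = best_match(scene, templates)
        print(label, scores)   # the recognized name could then be rendered as spoken audio
    ```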

  8. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance

    PubMed Central

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013

  9. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance.

    PubMed

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity).

  10. Congruent bodily arousal promotes the constructive recognition of emotional words.

    PubMed

    Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas

    2017-08-01

    Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants completed two blocks of a constructive recognition task, once after a cycling session (increased arousal), and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation session than after the cycling session, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing, and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    PubMed

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.

  12. Influences of emotion on context memory while viewing film clips.

    PubMed

    Anderson, Lisa; Shimamura, Arthur P

    2005-01-01

    Participants listened to words while viewing film clips (audio off). Film clips were classified as neutral, positively valenced, negatively valenced, and arousing. Memory was assessed in three ways: recall of film content, recall of words, and context recognition. In the context recognition test, participants were presented a word and determined which film clip was showing when the word was originally presented. In two experiments, context memory performance was disrupted when words were presented during negatively valenced film clips, whereas it was enhanced when words were presented during arousing film clips. Free recall of words presented during the negatively valenced films was also disrupted. These findings suggest multiple influences of emotion on memory performance.

  13. Speed discrimination predicts word but not pseudo-word reading rate in adults and children

    PubMed Central

    Main, Keith L.; Pestilli, Franco; Mezer, Aviv; Yeatman, Jason; Martin, Ryan; Phipps, Stephanie; Wandell, Brian

    2014-01-01

    Word familiarity may affect magnocellular processes of word recognition. To explore this idea, we measured reading rate, speed-discrimination, and contrast detection thresholds in adults and children with a wide range of reading abilities. We found that speed-discrimination thresholds are higher in children than in adults and are correlated with age. Speed discrimination thresholds are also correlated with reading rate, but only for words, not for pseudo-words. Conversely, we found no correlation between contrast sensitivity and reading rate and no correlation between speed discrimination thresholds and WASI subtest scores. These findings support the position that reading rate is influenced by magnocellular circuitry attuned to the recognition of familiar word-forms. PMID:25278418

  14. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of Eisenberg et al. (2002).

    PubMed

    Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F

    Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002), who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.

  15. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children with Normal Hearing: A Replication and Extension of Eisenberg et al., 2002

    PubMed Central

    Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.

    2016-01-01

    Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. 
These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787

  16. Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Parker, Andrew; Dagnall, Neil

    2007-01-01

    The effects of saccadic bilateral (horizontal) eye movements on gist-based false recognition were investigated. Following exposure to lists of words related to a critical but non-studied word, participants were asked to engage in 30 s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…

  17. Target recognition based on convolutional neural network

    NASA Astrophysics Data System (ADS)

    Wang, Liqiang; Wang, Xin; Xi, Fubiao; Dong, Jian

    2017-11-01

    An important part of object target recognition is feature extraction, which can be divided into manual (hand-crafted) feature extraction and automatic feature extraction. The traditional neural network is one automatic feature extraction method, but its fully connected structure carries a high risk of over-fitting. The deep learning algorithm used in this paper is a hierarchical automatic feature extraction method: a convolutional neural network (CNN) trained layer by layer, which extracts features from lower layers up to higher layers. The resulting features are more discriminative, which benefits object target recognition.
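
    The following is a minimal sketch of the kind of layer-by-layer convolutional feature hierarchy the abstract describes, not the paper's actual architecture; the input size (1 x 28 x 28) and the number of target classes are assumptions made purely for illustration.

    ```python
    import torch
    import torch.nn as nn

    class SmallCNN(nn.Module):
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1),   # lower layer: edges and blobs
                nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher layer: larger parts
                nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32 * 7 * 7, num_classes)

        def forward(self, x):
            x = self.features(x)                   # automatically learned, hierarchical features
            return self.classifier(x.flatten(1))

    model = SmallCNN()
    logits = model(torch.randn(4, 1, 28, 28))      # dummy batch of four 28x28 target chips
    print(logits.shape)                            # torch.Size([4, 10])
    ```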

  18. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    PubMed

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.

  19. Automatic recognition of postural allocations.

    PubMed

    Sazonov, Edward; Krishnamurthy, Vidya; Makeyev, Oleksandr; Browning, Ray; Schutz, Yves; Hill, James

    2007-01-01

    A significant part of daily energy expenditure may be attributed to non-exercise activity thermogenesis and exercise activity thermogenesis. Automatic recognition of postural allocations such as standing or sitting can be used in behavioral modification programs aimed at minimizing static postures. In this paper we propose a shoe-based device and related pattern recognition methodology for recognition of postural allocations. Inexpensive technology allows implementation of this methodology as a part of footwear. The experimental results suggest high efficiency and reliability of the proposed approach.
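
    As a rough sketch of how such posture recognition might look in code (this is not the authors' shoe-based method; the sensor traces, window length, features, and labels below are all hypothetical), windowed statistics from pressure and acceleration signals are fed to an off-the-shelf classifier.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    def window_features(pressure, accel, win=100):
        """Mean and standard deviation of each signal over non-overlapping windows."""
        feats = []
        for start in range(0, len(pressure) - win + 1, win):
            p = pressure[start:start + win]
            a = accel[start:start + win]
            feats.append([p.mean(), p.std(), a.mean(), a.std()])
        return np.array(feats)

    rng = np.random.default_rng(0)
    pressure = rng.normal(size=30_000)        # hypothetical insole pressure trace
    accel = rng.normal(size=30_000)           # hypothetical ankle acceleration trace
    X = window_features(pressure, accel)      # 300 windows x 4 features
    y = rng.integers(0, 3, size=len(X))       # hypothetical labels: 0=sit, 1=stand, 2=walk

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = SVC().fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```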

  20. Conceptually based vocabulary intervention: second graders' development of vocabulary words.

    PubMed

    Dimling, Lisa M

    2010-01-01

    An instructional strategy was investigated that addressed the needs of deaf and hard of hearing students through a conceptually based sign language vocabulary intervention. A single-subject multiple-baseline design was used to determine the effects of the vocabulary intervention on word recognition, production, and comprehension. Six students took part in the 30-minute intervention over 6-8 weeks, learning 12 new vocabulary words each week by means of the three intervention components: (a) word introduction, (b) word activity (semantic mapping), and (c) practice. Results indicated that the vocabulary intervention successfully improved all students' recognition, production, and comprehension of the vocabulary words and phrases.

  1. Relating dynamic brain states to dynamic machine states: Human and machine solutions to the speech recognition problem

    PubMed Central

    Liu, Xunying; Zhang, Chao; Woodland, Phil; Fonteneau, Elisabeth

    2017-01-01

    There is widespread interest in the relationship between the neurobiological systems supporting human cognition and emerging computational systems capable of emulating these capacities. Human speech comprehension, poorly understood as a neurobiological process, is an important case in point. Automatic Speech Recognition (ASR) systems with near-human levels of performance are now available, which provide a computationally explicit solution for the recognition of words in continuous speech. This research aims to bridge the gap between speech recognition processes in humans and machines, using novel multivariate techniques to compare incremental ‘machine states’, generated as the ASR analysis progresses over time, to the incremental ‘brain states’, measured using combined electro- and magneto-encephalography (EMEG), generated as the same inputs are heard by human listeners. This direct comparison of dynamic human and machine internal states, as they respond to the same incrementally delivered sensory input, revealed a significant correspondence between neural response patterns in human superior temporal cortex and the structural properties of ASR-derived phonetic models. Spatially coherent patches in human temporal cortex responded selectively to individual phonetic features defined on the basis of machine-extracted regularities in the speech to lexicon mapping process. These results demonstrate the feasibility of relating human and ASR solutions to the problem of speech recognition, and suggest the potential for further studies relating complex neural computations in human speech comprehension to the rapidly evolving ASR systems that address the same problem domain. PMID:28945744
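
    One common multivariate technique for relating model states to brain states is representational similarity analysis; the sketch below shows that general logic only and is not the authors' EMEG pipeline. The feature matrices are random placeholders standing in for ASR-derived and neural response patterns.

    ```python
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    n_items = 40
    asr_states = rng.normal(size=(n_items, 64))     # placeholder ASR state vectors per word
    brain_states = rng.normal(size=(n_items, 128))  # placeholder neural patterns per word

    rdm_asr = pdist(asr_states, metric="correlation")      # pairwise dissimilarities (model)
    rdm_brain = pdist(brain_states, metric="correlation")  # pairwise dissimilarities (brain)
    rho, p = spearmanr(rdm_asr, rdm_brain)                 # second-order correspondence
    print(f"model-brain RSA: rho={rho:.3f}, p={p:.3f}")
    ```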

  2. Mark My Words: Tone of Voice Changes Affective Word Representations in Memory

    PubMed Central

    Schirmer, Annett

    2010-01-01

    The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154

  3. Motivation and attention: Incongruent effects of feedback on the processing of valence.

    PubMed

    Rothermund, Klaus

    2003-09-01

    Four experiments investigated the relation between outcome-related motivational states and processes of automatic attention allocation. Experiments 1-3 analyzed influences of feedback on evaluative decisions. Words of opposite valence to the feedback were processed faster, indicating that it is easier to allocate attention to the valence of an affectively incongruent word. Experiment 4 replicated the incongruent effect with interference effects of word valence in a grammatical-categorization task, indicating that the effect reflects automatic attentional capture. In all experiments, incongruent effects of feedback emerged only in a situation involving an attentional shift between words that differed in valence.

  4. Usage of semantic representations in recognition memory.

    PubMed

    Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun

    2017-11-01

    Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are both directly and indirectly (i.e., mediated by perceptual representations) used in remembering. Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of rejection of the lures. However, attending to components of the lures could potentially cause this. Therefore, we investigated whether semantic overlap of lures facilitates MCEs using Japanese Kanji words, for which reading relies more heavily on a whole-word image. Experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1) and in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), with a more salient effect in individuals with high semantic memory capacities (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be directly used in remembering, even when perceptual representations of studied words are available.

  5. Automatic Item Generation via Frame Semantics: Natural Language Generation of Math Word Problems.

    ERIC Educational Resources Information Center

    Deane, Paul; Sheehan, Kathleen

    This paper is an exploration of the conceptual issues that have arisen in the course of building a natural language generation (NLG) system for automatic test item generation. While natural language processing techniques are applicable to general verbal items, mathematics word problems are particularly tractable targets for natural language…

  6. The influence of speech rate and accent on access and use of semantic information.

    PubMed

    Sajin, Stanislav M; Connine, Cynthia M

    2017-04-01

    Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.

  7. Implicit proactive interference, age, and automatic versus controlled retrieval strategies.

    PubMed

    Ikier, Simay; Yang, Lixia; Hasher, Lynn

    2008-05-01

    We assessed the extent to which implicit proactive interference results from automatic versus controlled retrieval among younger and older adults. During a study phase, targets (e.g., "ALLERGY") either were or were not preceded by nontarget competitors (e.g., "ANALOGY"). After a filled interval, the participants were asked to complete word fragments, some of which cued studied words (e.g., "A_L_ _GY"). Retrieval strategies were identified by the difference in response speed between a phase containing fragments that cued only new words and a phase that included a mix of fragments cuing old and new words. Previous results were replicated: Proactive interference was found in implicit memory, and the negative effects were greater for older than for younger adults. Novel findings demonstrate two retrieval processes that contribute to interference: an automatic one that is age invariant and a controlled process that can reduce the magnitude of the automatic interference effects. The controlled process, however, is used effectively only by younger adults. This pattern of findings potentially explains age differences in susceptibility to proactive interference.

  8. Emotion words and categories: evidence from lexical decision.

    PubMed

    Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C

    2014-05-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.

  9. Associations of hallucination proneness with free-recall intrusions and response bias in a nonclinical sample.

    PubMed

    Brébion, Gildas; Larøi, Frank; Van der Linden, Martial

    2010-10-01

    Hallucinations in patients with schizophrenia have been associated with a liberal response bias in signal detection and recognition tasks and with various types of source-memory error. We investigated the associations of hallucination proneness with free-recall intrusions and false recognitions of words in a nonclinical sample. A total of 81 healthy individuals were administered a verbal memory task involving free recall and recognition of one nonorganizable and one semantically organizable list of words. Hallucination proneness was assessed by means of a self-rating scale. Global hallucination proneness was associated with free-recall intrusions in the nonorganizable list and with a response bias reflecting tendency to make false recognitions of nontarget words in both types of list. The verbal hallucination score was associated with more intrusions and with a reduced tendency to make false recognitions of words. The associations between global hallucination proneness and two types of verbal memory error in a nonclinical sample corroborate those observed in patients with schizophrenia and suggest that common cognitive mechanisms underlie hallucinations in psychiatric and nonclinical individuals.

  10. Digital signal processing algorithms for automatic voice recognition

    NASA Technical Reports Server (NTRS)

    Botros, Nazeih M.

    1987-01-01

    Current digital signal analysis algorithms implemented in automatic voice recognition systems are investigated. Automatic voice recognition refers to the capability of a computer to recognize and interact with verbal commands. The focus is on the digital signal analysis, rather than the linguistic analysis, of the speech signal. Several digital signal processing algorithms are available for voice recognition, including Linear Predictive Coding (LPC), short-time Fourier analysis, and cepstrum analysis. Among these, LPC is the most widely used: it has a short execution time and does not require large memory storage. However, it has several limitations due to the assumptions used in its derivation. The other two algorithms are frequency-domain algorithms that rely on fewer assumptions, but they have not been widely implemented or investigated. With recent advances in digital technology, namely signal processors, these two frequency-domain algorithms may now be investigated for implementation in voice recognition. This research is concerned with real-time, microprocessor-based recognition algorithms.
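
    As a concrete illustration of the LPC approach mentioned above, here is a minimal sketch using the standard autocorrelation method and Levinson-Durbin recursion; it is not the report's own implementation, and the analysis frame below is synthetic.

    ```python
    import numpy as np

    def lpc(frame, order=10):
        """LPC coefficients via the autocorrelation method and Levinson-Durbin recursion."""
        frame = frame * np.hamming(len(frame))                       # taper the analysis frame
        r = np.correlate(frame, frame, mode="full")[len(frame) - 1:len(frame) + order]
        a = np.zeros(order + 1)
        a[0] = 1.0
        err = r[0]
        for i in range(1, order + 1):
            acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
            k = -acc / err                                           # reflection coefficient
            a_new = a.copy()
            for j in range(1, i):
                a_new[j] = a[j] + k * a[i - j]
            a_new[i] = k
            a = a_new
            err *= 1.0 - k * k                                       # update prediction error
        return a, err

    # Synthetic 30 ms frame at 8 kHz: a damped 500 Hz resonance plus a little noise
    rng = np.random.default_rng(0)
    n = np.arange(240)
    frame = np.exp(-n / 200) * np.sin(2 * np.pi * 500 * n / 8000) + 0.01 * rng.normal(size=240)
    coeffs, residual_energy = lpc(frame, order=10)
    print(coeffs)
    ```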

  11. Improving language models for radiology speech recognition.

    PubMed

    Paulett, John M; Langlotz, Curtis P

    2009-02-01

    Speech recognition systems have become increasingly popular as a means to produce radiology reports, for reasons both of efficiency and of cost. However, the suboptimal recognition accuracy of these systems can affect the productivity of the radiologists creating the text reports. We analyzed a database of over two million de-identified radiology reports to determine the strongest determinants of word frequency. Our results showed that body site and imaging modality had a similar influence on the frequency of words and of three-word phrases as did the identity of the speaker. These findings suggest that the accuracy of speech recognition systems could be significantly enhanced by further tailoring their language models to body site and imaging modality, which are readily available at the time of report creation.
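
    To make the tailoring idea concrete, here is a toy sketch (not the authors' system) of counting word-pair frequencies conditioned on imaging modality and body site, the kind of statistic a tailored language model would exploit; the reports and metadata below are invented.

    ```python
    from collections import Counter, defaultdict

    reports = [  # (modality, body site, report text) - hypothetical examples
        ("CT", "chest", "no acute cardiopulmonary abnormality"),
        ("CT", "chest", "no focal consolidation or pleural effusion"),
        ("MR", "brain", "no acute intracranial abnormality"),
    ]

    bigrams = defaultdict(Counter)
    for modality, site, text in reports:
        words = text.split()
        for w1, w2 in zip(words, words[1:]):
            bigrams[(modality, site)][(w1, w2)] += 1

    # Likely continuations of "no" in chest CT reports, by frequency
    ct_chest = bigrams[("CT", "chest")]
    print([pair for pair, _ in ct_chest.most_common() if pair[0] == "no"])
    ```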

  12. Pictures, images, and recollective experience.

    PubMed

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  13. The Role of Semantics in Translation Recognition: Effects of Number of Translations, Dominance of Translations and Semantic Relatedness of Multiple Translations

    ERIC Educational Resources Information Center

    Laxen, Jannika; Lavaur, Jean-Marc

    2010-01-01

    This study aims to examine the influence of multiple translations of a word on bilingual processing in three translation recognition experiments during which French-English bilinguals had to decide whether two words were translations of each other or not. In the first experiment, words with only one translation were recognized as translations…

  14. Creating a medical dictionary using word alignment: the influence of sources and resources.

    PubMed

    Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Ahlfeldt, Hans

    2007-11-23

    Automatic word alignment of parallel texts with the same content in different languages is among other things used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. We automatically word aligned the terminology systems using static resources, like dictionaries, statistical resources, like statistically derived dictionaries, and training resources, which were generated from manual word alignment. We varied which part of the terminology systems that we used to generate the resources, which parts that we word aligned and which types of resources we used in the alignment process to explore the influence the different terminology systems and resources have on the recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. The results indicate that more resources and resource types give better results but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10 and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static resources, statistical resources and training resources and noticeably better results than a union of static resources and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. More resources give better results in the automatic word alignment, but some resources only give small improvements. The most important type of resource is training and the most general resources were generated from ICD-10.
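
    For readers unfamiliar with statistically derived dictionaries, the toy sketch below shows one simple co-occurrence signal (the Dice coefficient) from which such resources can be built; it is only an illustration of the idea, not the authors' pipeline, and the parallel rubric pairs are invented rather than actual ICD-10 content.

    ```python
    from collections import Counter
    from itertools import product

    pairs = [  # hypothetical English-Swedish rubric pairs
        ("acute bronchitis", "akut bronkit"),
        ("chronic bronchitis", "kronisk bronkit"),
        ("acute sinusitis", "akut sinuit"),
    ]

    src_count, tgt_count, joint = Counter(), Counter(), Counter()
    for en, sv in pairs:
        en_words, sv_words = set(en.split()), set(sv.split())
        src_count.update(en_words)
        tgt_count.update(sv_words)
        joint.update(product(en_words, sv_words))

    def dice(e, s):
        """Co-occurrence score for an English/Swedish word pair; higher = better candidate."""
        return 2 * joint[(e, s)] / (src_count[e] + tgt_count[s])

    print(dice("acute", "akut"), dice("acute", "bronkit"))  # 1.0 vs 0.5
    ```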

  15. Creating a medical dictionary using word alignment: The influence of sources and resources

    PubMed Central

    Nyström, Mikael; Merkel, Magnus; Petersson, Håkan; Åhlfeldt, Hans

    2007-01-01

    Background Automatic word alignment of parallel texts with the same content in different languages is among other things used to generate dictionaries for new translations. The quality of the generated word alignment depends on the quality of the input resources. In this paper we report on automatic word alignment of the English and Swedish versions of the medical terminology systems ICD-10, ICF, NCSP, KSH97-P and parts of MeSH and how the terminology systems and type of resources influence the quality. Methods We automatically word aligned the terminology systems using static resources, like dictionaries, statistical resources, like statistically derived dictionaries, and training resources, which were generated from manual word alignment. We varied which part of the terminology systems that we used to generate the resources, which parts that we word aligned and which types of resources we used in the alignment process to explore the influence the different terminology systems and resources have on the recall and precision. After the analysis, we used the best configuration of the automatic word alignment for generation of candidate term pairs. We then manually verified the candidate term pairs and included the correct pairs in an English-Swedish dictionary. Results The results indicate that more resources and resource types give better results but the size of the parts used to generate the resources only partly affects the quality. The most generally useful resources were generated from ICD-10 and resources generated from MeSH were not as general as other resources. Systematic inter-language differences in the structure of the terminology system rubrics make the rubrics harder to align. Manually created training resources give nearly as good results as a union of static resources, statistical resources and training resources and noticeably better results than a union of static resources and statistical resources. The verified English-Swedish dictionary contains 24,000 term pairs in base forms. Conclusion More resources give better results in the automatic word alignment, but some resources only give small improvements. The most important type of resource is training and the most general resources were generated from ICD-10. PMID:18036221

  16. Amplitude (vu and rms) and Temporal (msec) Measures of Two Northwestern University Auditory Test No. 6 Recordings.

    PubMed

    Wilson, Richard H

    2015-04-01

    In 1940, a cooperative effort by the radio networks and Bell Telephone produced the volume unit (vu) meter that has been the mainstay instrument for monitoring the level of speech signals in commercial broadcasting and research laboratories. With the use of computers, today the amplitude of signals can be quantified easily using the root mean square (rms) algorithm. Researchers had previously reported that amplitude estimates of sentences and running speech were 4.8 dB higher when measured with a vu meter than when calculated with rms. This study addresses the vu-rms relation as applied to the carrier phrase and target word paradigm used to assess word-recognition abilities, the premise being that by definition the word-recognition paradigm is a special and different case from that described previously. The purpose was to evaluate the vu and rms amplitude relations for the carrier phrases and target words commonly used to assess word-recognition abilities. In addition, the relations with the target words between rms level and recognition performance were examined. Descriptive and correlational. Two recoded versions of the Northwestern University Auditory Test No. 6 were evaluated, the Auditec of St. Louis (Auditec) male speaker and the Department of Veterans Affairs (VA) female speaker. Using both visual and auditory cues from a waveform editor, the temporal onsets and offsets were defined for each carrier phrase and each target word. The rms amplitudes for those segments then were computed and expressed in decibels with reference to the maximum digitization range. The data were maintained for each of the four Northwestern University Auditory Test No. 6 word lists. Descriptive analyses were used with linear regressions used to evaluate the reliability of the measurement technique and the relation between the rms levels of the target words and recognition performances. Although there was a 1.3 dB difference between the calibration tones, the mean levels of the carrier phrases for the two recordings were -14.8 dB (Auditec) and -14.1 dB (VA) with standard deviations <1 dB. For the target words, the mean amplitudes were -19.9 dB (Auditec) and -18.3 dB (VA) with standard deviations ranging from 1.3 to 2.4 dB. The mean durations for the carrier phrases of both recordings were 593-594 msec, with the mean durations of the target words a little different, 509 msec (Auditec) and 528 msec (VA). Random relations were observed between the recognition performances and rms levels of the target words. Amplitude and temporal data for the individual words are provided. The rms levels of the carrier phrases closely approximated (±1 dB) the rms levels of the calibration tones, both of which were set to 0 vu (dB). The rms levels of the target words were 5-6 dB below the levels of the carrier phrases and were substantially more variable than the levels of the carrier phrases. The relation between the rms levels of the target words and recognition performances on the words was random. American Academy of Audiology.
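
    The rms measure described above is straightforward to compute from a digitized segment; the sketch below expresses a segment's level in dB relative to the maximum digitization range (full scale). It is only an illustration: the sampling rate and the synthetic "target word" segment are placeholders.

    ```python
    import numpy as np

    def rms_db_fs(samples, full_scale):
        """rms level in dB re the maximum digitization range."""
        rms = np.sqrt(np.mean(samples.astype(float) ** 2))
        return 20 * np.log10(rms / full_scale)

    fs = 44100
    t = np.arange(int(0.5 * fs)) / fs
    # Synthetic 16-bit segment standing in for one excised target word
    segment = (0.1 * 32767 * np.sin(2 * np.pi * 1000 * t)).astype(np.int16)
    print(f"{rms_db_fs(segment, 32767):.1f} dB re full scale")  # roughly -23 dB
    ```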

  17. Handwritten Word Recognition Using Multi-view Analysis

    NASA Astrophysics Data System (ADS)

    de Oliveira, J. J.; de A. Freitas, C. O.; de Carvalho, J. M.; Sabourin, R.

    This paper contributes to the problem of efficiently recognizing handwritten words from a limited-size lexicon. To that end, a multiple classifier system was developed that analyzes words at three different approximation levels, yielding a computational approach inspired by the human reading process. For each approximation level, a three-module architecture composed of a zoning mechanism (pseudo-segmenter), a feature extractor, and a classifier is defined. The proposed application is the recognition of Portuguese handwritten month names, for which a best recognition rate of 97.7% was obtained using classifier combination.
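
    A toy sketch of the combination step only (not the paper's classifiers or features): each of three classifiers, corresponding to a different approximation level, produces scores over the lexicon, and a simple average rule picks the winning word. The scores below are made-up numbers.

    ```python
    import numpy as np

    lexicon = ["janeiro", "fevereiro", "dezembro"]
    # Per-classifier scores over the lexicon (rows = the three approximation levels)
    scores = np.array([
        [0.7, 0.2, 0.1],   # coarse view
        [0.6, 0.3, 0.1],   # intermediate view
        [0.5, 0.4, 0.1],   # detailed view
    ])
    combined = scores.mean(axis=0)            # average combination rule
    print(lexicon[int(combined.argmax())])    # -> "janeiro"
    ```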

  18. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    PubMed

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.
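
    A hedged sketch of the multiple regression logic mentioned above (not the authors' exact analysis): recognition latencies are regressed on concreteness with other psycholinguistic variables entered as covariates. The data frame here is synthetic, so the coefficients are meaningless; it only shows the form of the model.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 500
    df = pd.DataFrame({
        "rt": rng.normal(650, 80, n),        # lexical decision latency (ms), synthetic
        "concreteness": rng.uniform(1, 7, n),
        "frequency": rng.normal(2, 1, n),    # log word frequency, synthetic
        "length": rng.integers(3, 12, n),
        "aoa": rng.uniform(2, 12, n),        # age of acquisition, synthetic
    })
    model = smf.ols("rt ~ concreteness + frequency + length + aoa", data=df).fit()
    print(model.params["concreteness"], model.pvalues["concreteness"])
    ```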

  19. Faces are special but not too special: Spared face recognition in amnesia is based on familiarity

    PubMed Central

    Aly, Mariam; Knight, Robert T.; Yonelinas, Andrew P.

    2014-01-01

    Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results suggest that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. The findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia, and different types of materials may depend more or less on recollection and familiarity. PMID:20833190

  20. Voice gender and the segregation of competing talkers: Perceptual learning in cochlear implant simulations

    PubMed Central

    Sullivan, Jessica R.; Assmann, Peter F.; Hossain, Shaikat; Schafer, Erin C.

    2017-01-01

    Two experiments explored the role of differences in voice gender in the recognition of speech masked by a competing talker in cochlear implant simulations. Experiment 1 confirmed that listeners with normal hearing receive little benefit from differences in voice gender between a target and masker sentence in four- and eight-channel simulations, consistent with previous findings that cochlear implants deliver an impoverished representation of the cues for voice gender. However, gender differences led to small but significant improvements in word recognition with 16 and 32 channels. Experiment 2 assessed the benefits of perceptual training on the use of voice gender cues in an eight-channel simulation. Listeners were assigned to one of four groups: (1) word recognition training with target and masker differing in gender; (2) word recognition training with same-gender target and masker; (3) gender recognition training; or (4) control with no training. Significant improvements in word recognition were observed from pre- to post-test sessions for all three training groups compared to the control group. These improvements were maintained at the late session (one week following the last training session) for all three groups. There was an overall improvement in masked word recognition performance provided by gender mismatch following training, but the amount of benefit did not differ as a function of the type of training. The training effects observed here are consistent with a form of rapid perceptual learning that contributes to the segregation of competing voices but does not specifically enhance the benefits provided by voice gender cues. PMID:28372046
