Sample records for word recognition ability

  1. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts.

    PubMed

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2016-06-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. Copyright © 2016 Elsevier Ltd. All rights reserved.
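
    The growth curve analysis mentioned in this record lends itself to a short illustration. Below is a minimal sketch, assuming a hypothetical long-format table with columns `subject`, `group`, `block`, and `acc` (proportion correct per learning block); the orthogonal time terms and the statsmodels mixed-model call are illustrative choices, not the authors' code.

    ```python
    # Hedged sketch: growth curve analysis of learning curves with orthogonal
    # polynomial time terms and a by-subject random intercept.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)

    # Hypothetical long-format data: accuracy per subject, group, and learning block.
    n_sub, n_block = 28, 10
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n_sub), n_block),
        "group": np.repeat(["aphasia", "control"], n_sub // 2 * n_block),
        "block": np.tile(np.arange(n_block), n_sub),
    })
    df["acc"] = (0.4 + 0.03 * df["block"] + 0.1 * (df["group"] == "control")
                 + rng.normal(0, 0.05, len(df)))

    # Orthogonal linear and quadratic time terms (decorrelated polynomial contrasts).
    poly = np.polynomial.polynomial.polyvander(np.linspace(-1, 1, n_block), 2)[:, 1:]
    poly, _ = np.linalg.qr(poly)            # orthonormal columns
    df["t1"] = poly[df["block"], 0]
    df["t2"] = poly[df["block"], 1]

    # Mixed-effects model: group differences on intercept, slope, and curvature.
    model = smf.mixedlm("acc ~ (t1 + t2) * group", df, groups=df["subject"])
    print(model.fit().summary())
    ```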

  2. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  3. The effect of background noise on the word activation process in nonnative spoken-word recognition.

    PubMed

    Scharenborg, Odette; Coumans, Juul M J; van Hout, Roeland

    2018-02-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on (non-)native speech recognition? English and Dutch students participated in an English word recognition experiment, in which either a word's onset or offset was masked by noise. The native listeners outperformed the nonnative listeners in all listening conditions. Importantly, however, the effect of noise on the multiple activation process was found to be remarkably similar in native and nonnative listening. The presence of noise increased the set of candidate words considered for recognition in both native and nonnative listening. The results indicate that the observed performance differences between the English and Dutch listeners should not be primarily attributed to a differential effect of noise, but rather to the difference between native and nonnative listening. Additional analyses showed that word-initial information was more important than word-final information during spoken-word recognition. When word-initial information was no longer reliably available, word recognition accuracy dropped and word frequency information could no longer be used, suggesting that word frequency information is strongly tied to the onset of words and the earliest moments of lexical access. Proficiency and inhibition ability were found to influence nonnative spoken-word recognition in noise, with a higher proficiency in the nonnative language and worse inhibition ability leading to improved recognition performance. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
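
    A toy sketch of the candidate-set idea in this record: masking a word's onset versus its offset leaves different sets of lexicon entries consistent with the audible portion. The mini-lexicon and the three-letter mask below are invented for illustration.

    ```python
    # Hedged toy sketch: how masking a word's onset vs. offset changes the set of
    # candidate words still consistent with the audible portion.
    LEXICON = {"candle": 120, "candy": 310, "sandal": 45, "handle": 200,
               "canvas": 60, "cattle": 95, "saddle": 30}

    def candidates(word, masked="onset", n_masked=3):
        """Return lexicon entries consistent with the unmasked part of `word`."""
        if masked == "onset":                    # offset is audible
            audible = word[n_masked:]
            match = lambda w: w.endswith(audible)
        else:                                    # onset is audible
            audible = word[:-n_masked]
            match = lambda w: w.startswith(audible)
        return {w: f for w, f in LEXICON.items() if match(w)}

    for cond in ("onset", "offset"):
        cands = candidates("candle", masked=cond)
        print(f"{cond}-masked 'candle': {len(cands)} candidates -> {sorted(cands)}")
    ```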

  4. Word Recognition and Critical Reading.

    ERIC Educational Resources Information Center

    Groff, Patrick

    1991-01-01

    This article discusses the distinctions between literal and critical reading and explains the role that word recognition ability plays in critical reading behavior. It concludes that correct word recognition provides the raw material on which higher order critical reading is based. (DB)

  5. Cross-modal working memory binding and word recognition skills: how specific is the link?

    PubMed

    Wang, Shinmin; Allen, Richard J

    2018-04-01

    Recent research has suggested that the creation of temporary bound representations of information from different sources within working memory uniquely relates to word recognition abilities in school-age children. However, it is unclear to what extent this link is attributable specifically to the binding ability for cross-modal information. This study examined the performance of Grade 3 (8-9 years old) children on binding tasks requiring either temporary association formation of two visual items (i.e., within-modal binding) or pairs of visually presented abstract shapes and auditorily presented nonwords (i.e., cross-modal binding). Children's word recognition skills were related to performance on the cross-modal binding task but not on the within-modal binding task. Further regression models showed that cross-modal binding memory was a significant predictor of word recognition when memory for its constituent elements, general abilities, and crucially, within-modal binding memory were taken into account. These findings may suggest a specific link between the ability to bind information across modalities within working memory and word recognition skills.

  6. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014
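
    The lexical-decay account in this record can be illustrated with a toy activation model (not TRACE itself): a target and two competitors receive graded input, compete through lateral inhibition, and decay at a rate that is varied. All parameter values are assumptions made for the sketch.

    ```python
    # Hedged toy sketch (not the TRACE model itself): lexical activation with
    # lateral inhibition, showing how a larger decay rate flattens the separation
    # between target and competitors over processing cycles.
    import numpy as np

    def run(decay, n_cycles=40, inhibition=0.05):
        # activations for [target, cohort competitor, rhyme competitor]
        act = np.zeros(3)
        bottom_up = np.array([0.10, 0.06, 0.03])   # hypothetical input support
        for _ in range(n_cycles):
            net = bottom_up - inhibition * (act.sum() - act)  # lateral inhibition
            act = np.clip(act + net - decay * act, 0.0, 1.0)
        return act

    for decay in (0.05, 0.20):
        target, cohort, rhyme = run(decay)
        print(f"decay={decay}: target={target:.2f}, "
              f"cohort={cohort:.2f}, rhyme={rhyme:.2f}")
    ```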

  7. Individual differences in language and working memory affect children's speech recognition in noise.

    PubMed

    McCreery, Ryan W; Spratford, Meredith; Kirby, Benjamin; Brennan, Marc

    2017-05-01

    We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. Ninety-six children with normal hearing, who were between 5 and 12 years of age. Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.

  8. Modeling Polymorphemic Word Recognition: Exploring Differences among Children with Early-Emerging and Late- Emerging Word Reading Difficulty

    ERIC Educational Resources Information Center

    Kearns, Devin M.; Steacy, Laura M.; Compton, Donald L.; Gilbert, Jennifer K.; Goodwin, Amanda P.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.

    2016-01-01

    Comprehensive models of derived polymorphemic word recognition skill in developing readers, with an emphasis on children with reading difficulty (RD), have not been developed. The purpose of the present study was to model individual differences in polymorphemic word recognition ability at the item level among 5th-grade children (N = 173)…

  9. Relationships among vocabulary size, nonverbal cognition, and spoken word recognition in adults with cochlear implants

    NASA Astrophysics Data System (ADS)

    Collison, Elizabeth A.; Munson, Benjamin; Carney, Arlene E.

    2002-05-01

    Recent research has attempted to identify the factors that predict speech perception performance among users of cochlear implants (CIs). Studies have found that approximately 20%-60% of the variance in speech perception scores can be accounted for by factors including duration of deafness, etiology, type of device, and length of implant use, leaving approximately 50% of the variance unaccounted for. The current study examines the extent to which vocabulary size and nonverbal cognitive ability predict CI listeners' spoken word recognition. Fifteen postlingually deafened adults with Nucleus or Clarion CIs were given standardized assessments of nonverbal cognitive ability and expressive vocabulary size: the Expressive Vocabulary Test, the Test of Nonverbal Intelligence-III, and the Woodcock-Johnson-III Test of Cognitive Ability, Verbal Comprehension subtest. Two spoken word recognition tasks were administered. In the first, listeners identified isophonemic CVC words. In the second, listeners identified gated words varying in lexical frequency and neighborhood density. Analyses will examine the influence of lexical frequency and neighborhood density on the uniqueness point in the gating task, as well as relationships among nonverbal cognitive ability, vocabulary size, and the two spoken word recognition measures. [Work supported by NIH Grant P01 DC00110 and by the Lions 3M Hearing Foundation.]
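
    Two of the lexical measures used in this record, neighborhood density and the gating uniqueness point, are easy to compute over a toy lexicon; the sketch below uses an invented word list purely for illustration.

    ```python
    # Hedged sketch: neighborhood density (words differing by one substitution,
    # addition, or deletion) and the gating "uniqueness point" (first position at
    # which a word no longer shares its onset with any other word), over a toy lexicon.
    LEXICON = ["cat", "cap", "can", "cast", "bat", "catalog", "dog"]

    def neighbors(word, lexicon):
        def edit1(a, b):
            if abs(len(a) - len(b)) > 1:
                return False
            if len(a) == len(b):                       # one substitution
                return sum(x != y for x, y in zip(a, b)) == 1
            short, long_ = sorted((a, b), key=len)     # one insertion/deletion
            return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
        return [w for w in lexicon if w != word and edit1(word, w)]

    def uniqueness_point(word, lexicon):
        for i in range(1, len(word) + 1):
            prefix = word[:i]
            if not any(w != word and w.startswith(prefix) for w in lexicon):
                return i
        return None                                    # word is a prefix of another word

    print(neighbors("cat", LEXICON))            # -> ['cap', 'can', 'cast', 'bat']
    print(uniqueness_point("catalog", LEXICON)) # -> 4 ('cata' shared with no other entry)
    ```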

  10. Influences of High and Low Variability on Infant Word Recognition

    ERIC Educational Resources Information Center

    Singh, Leher

    2008-01-01

    Although infants begin to encode and track novel words in fluent speech by 7.5 months, their ability to recognize words is somewhat limited at this stage. In particular, when the surface form of a word is altered, by changing the gender or affective prosody of the speaker, infants begin to falter at spoken word recognition. Given that natural…

  11. Effectiveness of a Phonological Awareness Training Intervention on Word Recognition Ability of Children with Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Mohammed, Adel Abdulla; Mostafa, Amaal Ahmed

    2012-01-01

    This study describes an action research project designed to improve word recognition ability of children with Autism Spectrum Disorder. A total of 47 children diagnosed as having Autism Spectrum Disorder using the Autism Spectrum Disorder Evaluation Inventory (Mohammed, 2006) participated in this study. The sample was randomly divided into two…

  12. Memory for Pictures, Words, and Spatial Location in Older Adults: Evidence for Pictorial Superiority.

    ERIC Educational Resources Information Center

    Park, Denise Cortis; And Others

    1983-01-01

    Tested recognition memory for items and spatial location by varying picture and word stimuli across four slide quadrants. Results showed a pictorial superiority effect for item recognition and a greater ability to remember the spatial location of pictures versus words for both old and young adults (N=95). (WAS)

  13. Word Recognition and Cognitive Profiles of Chinese Pre-School Children at Risk for Dyslexia through Language Delay or Familial History of Dyslexia

    ERIC Educational Resources Information Center

    McBride-Chang, Catherine; Lam, Fanny; Lam, Catherine; Doo, Sylvia; Wong, Simpson W. L.; Chow, Yvonne Y. Y.

    2008-01-01

    Background: This study sought to identify cognitive abilities that might distinguish Hong Kong Chinese kindergarten children at risk for dyslexia through either language delay or familial history of dyslexia from children who were not at risk and to examine how these abilities were associated with Chinese word recognition. The cognitive skills of…

  14. Evaluating the developmental trajectory of the episodic buffer component of working memory and its relation to word recognition in children.

    PubMed

    Wang, Shinmin; Allen, Richard J; Lee, Jun Ren; Hsieh, Chia-En

    2015-05-01

    The creation of temporary bound representation of information from different sources is one of the key abilities attributed to the episodic buffer component of working memory. Whereas the role of working memory in word learning has received substantial attention, very little is known about the link between the development of word recognition skills and the ability to bind information in the episodic buffer of working memory and how it may develop with age. This study examined the performance of Grade 2 children (8 years old), Grade 3 children (9 years old), and young adults on a task designed to measure their ability to bind visual and auditory-verbal information in working memory. Children's performance on this task significantly correlated with their word recognition skills even when chronological age, memory for individual elements, and other possible reading-related factors were taken into account. In addition, clear developmental trajectories were observed, with improvements in the ability to hold temporary bound information in working memory between Grades 2 and 3, and between the child and adult groups, that were independent from memory for the individual elements. These findings suggest that the capacity to temporarily bind novel auditory-verbal information to visual form in working memory is linked to the development of word recognition in children and improves with age. Copyright © 2015 Elsevier Inc. All rights reserved.
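
    A minimal sketch of the kind of hierarchical regression described in this record: test whether a binding-memory score explains variance in word recognition beyond age and memory for the individual elements. The data frame and column names are hypothetical.

    ```python
    # Hedged sketch: hierarchical regression testing the incremental contribution
    # of binding memory after age and element memory (simulated, hypothetical data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 120
    df = pd.DataFrame({
        "age_months": rng.normal(102, 8, n),
        "element_memory": rng.normal(0, 1, n),
        "binding_memory": rng.normal(0, 1, n),
    })
    df["word_recognition"] = (0.02 * df["age_months"] + 0.3 * df["element_memory"]
                              + 0.5 * df["binding_memory"] + rng.normal(0, 1, n))

    base = smf.ols("word_recognition ~ age_months + element_memory", df).fit()
    full = smf.ols("word_recognition ~ age_months + element_memory + binding_memory",
                   df).fit()

    print(f"R2 without binding: {base.rsquared:.3f}")
    print(f"R2 with binding:    {full.rsquared:.3f}")
    print(full.compare_f_test(base))   # F-test for the incremental step
    ```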

  15. Investigating an Innovative Computer Application to Improve L2 Word Recognition from Speech

    ERIC Educational Resources Information Center

    Matthews, Joshua; O'Toole, John Mitchell

    2015-01-01

    The ability to recognise words from the aural modality is a critical aspect of successful second language (L2) listening comprehension. However, little research has been reported on computer-mediated development of L2 word recognition from speech in L2 learning contexts. This report describes the development of an innovative computer application…

  16. Beyond word recognition: understanding pediatric oral health literacy.

    PubMed

    Richman, Julia Anne; Huebner, Colleen E; Leggott, Penelope J; Mouradian, Wendy E; Mancl, Lloyd A

    2011-01-01

    Parental oral health literacy is proposed to be an indicator of children's oral health. The purpose of this study was to test if word recognition, commonly used to assess health literacy, is an adequate measure of pediatric oral health literacy. This study evaluated 3 aspects of oral health literacy and parent-reported child oral health. A 3-part pediatric oral health literacy inventory was created to assess parents' word recognition, vocabulary knowledge, and comprehension of 35 terms used in pediatric dentistry. The inventory was administered to 45 English-speaking parents of children enrolled in Head Start. Parents' ability to read dental terms was not associated with vocabulary knowledge (r=0.29, P<.06) or comprehension (r=0.28, P>.06) of the terms. Vocabulary knowledge was strongly associated with comprehension (r=0.80, P<.001). Parent-reported child oral health status was not associated with word recognition, vocabulary knowledge, or comprehension; however, parents reporting either excellent or fair/poor ratings had higher scores on all components of the inventory. Word recognition is an inadequate indicator of comprehension of pediatric oral health concepts; pediatric oral health literacy is a multifaceted construct. Parents with adequate reading ability may have difficulty understanding oral health information.

  17. Some considerations in evaluating spoken word recognition by normal-hearing, noise-masked normal-hearing, and cochlear implant listeners. I: The effects of response format.

    PubMed

    Sommers, M S; Kirk, K I; Pisoni, D B

    1997-04-01

    The purpose of the present studies was to assess the validity of using closed-set response formats to measure two cognitive processes essential for recognizing spoken words---perceptual normalization (the ability to accommodate acoustic-phonetic variability) and lexical discrimination (the ability to isolate words in the mental lexicon). In addition, the experiments were designed to examine the effects of response format on evaluation of these two abilities in normal-hearing (NH), noise-masked normal-hearing (NMNH), and cochlear implant (CI) subject populations. The speech recognition performance of NH, NMNH, and CI listeners was measured using both open- and closed-set response formats under a number of experimental conditions. To assess talker normalization abilities, identification scores for words produced by a single talker were compared with recognition performance for items produced by multiple talkers. To examine lexical discrimination, performance for words that are phonetically similar to many other words (hard words) was compared with scores for items with few phonetically similar competitors (easy words). Open-set word identification for all subjects was significantly poorer when stimuli were produced in lists with multiple talkers compared with conditions in which all of the words were spoken by a single talker. Open-set word recognition also was better for lexically easy compared with lexically hard words. Closed-set tests, in contrast, failed to reveal the effects of either talker variability or lexical difficulty even when the response alternatives provided were systematically selected to maximize confusability with target items. These findings suggest that, although closed-set tests may provide important information for clinical assessment of speech perception, they may not adequately evaluate a number of cognitive processes that are necessary for recognizing spoken words. The parallel results obtained across all subject groups indicate that NH, NMNH, and CI listeners engage similar perceptual operations to identify spoken words. Implications of these findings for the design of new test batteries that can provide comprehensive evaluations of the individual capacities needed for processing spoken language are discussed.

  18. Severe difficulties with word recognition in noise after platinum chemotherapy in childhood, and improvements with open-fitting hearing-aids.

    PubMed

    Einarsson, Einar-Jón; Petersen, Hannes; Wiebe, Thomas; Fransson, Per-Anders; Magnusson, Måns; Moëll, Christian

    2011-10-01

    To investigate word recognition in noise in subjects treated in childhood with chemotherapy, study benefits of open-fitting hearing-aids for word recognition, and investigate whether self-reported hearing-handicap corresponded to subjects' word recognition ability. Subjects diagnosed with cancer and treated with platinum-based chemotherapy in childhood underwent audiometric evaluations. Fifteen subjects (eight females and seven males) fulfilled the criteria set for the study, and four of those received customized open-fitting hearing-aids. Subjects with cisplatin-induced ototoxicity had severe difficulties recognizing words in noise, and scored as low as 54% below reference scores standardized for age and degree of hearing loss. Hearing-impaired subjects' self-reported hearing-handicap correlated significantly with word recognition in a quiet environment but not in noise. Word recognition in noise improved markedly (up to 46%) with hearing-aids, and the self-reported hearing-handicap and disability score were reduced by more than 50%. This study demonstrates the importance of testing word recognition in noise in subjects treated with platinum-based chemotherapy in childhood, and to use specific custom-made questionnaires to evaluate the experienced hearing-handicap. Open-fitting hearing-aids are a good alternative for subjects suffering from poor word recognition in noise.

  19. Brief report: accuracy and response time for the recognition of facial emotions in a large sample of children with autism spectrum disorders.

    PubMed

    Fink, Elian; de Rosnay, Marc; Wierda, Marlies; Koot, Hans M; Begeer, Sander

    2014-09-01

    The empirical literature has presented inconsistent evidence for deficits in the recognition of basic emotion expressions in children with autism spectrum disorders (ASD), which may be due to the focus on research with relatively small sample sizes. Additionally, it is proposed that although children with ASD may correctly identify emotion expressions, they rely on more deliberate, more time-consuming strategies in order to accurately recognize emotion expressions when compared to typically developing children. In the current study, we examine both emotion recognition accuracy and response time in a large sample of children, and explore the moderating influence of verbal ability on these findings. The sample consisted of 86 children with ASD (M age = 10.65) and 114 typically developing children (M age = 10.32) between 7 and 13 years of age. All children completed a pre-test (emotion word-word matching) and a test phase consisting of basic emotion recognition, whereby they were required to match a target emotion expression to the correct emotion word; accuracy and response time were recorded. Verbal IQ was controlled for in the analyses. We found no evidence of a systematic deficit in emotion recognition accuracy or response time for children with ASD, controlling for verbal ability. However, when controlling for children's accuracy in word-word matching, children with ASD had significantly lower emotion recognition accuracy when compared to typically developing children. The findings suggest that the social impairments observed in children with ASD are not the result of marked deficits in basic emotion recognition accuracy or longer response times. However, children with ASD may be relying on other perceptual skills (such as advanced word-word matching) to complete emotion recognition tasks at a similar level as typically developing children.

  20. Genetic Influences on Early Word Recognition Abilities and Disabilities: A Study of 7-Year-Old Twins

    ERIC Educational Resources Information Center

    Harlaar, Nicole; Spinath, Frank M.; Dale, Philip S.; Plomin, Robert

    2005-01-01

    Background: A fundamental issue for child psychology concerns the origins of individual differences in early reading development. Method: A measure of word recognition, the Test of Word Reading Efficiency (TOWRE), was administered by telephone to a representative population sample of 3,909 same-sex and opposite-sex pairs of 7-year-old twins.…

  21. The Effects of Multiple Script Priming on Word Recognition by the Two Cerebral Hemispheres: Implications for Discourse Processing

    ERIC Educational Resources Information Center

    Faust, Miriam; Barak, Ofra; Chiarello, Christine

    2006-01-01

    The present study examined left (LH) and right (RH) hemisphere involvement in discourse processing by testing the ability of each hemisphere to use world knowledge in the form of script contexts for word recognition. Participants made lexical decisions to laterally presented target words preceded by centrally presented script primes (four…

  22. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children With Normal Hearing: A Replication and Extension of Eisenberg et al. (2002).

    PubMed

    Roman, Adrienne S; Pisoni, David B; Kronenberger, William G; Faulkner, Kathleen F

    Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002), who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention (AA) and response set, talker discrimination, and verbal and nonverbal short-term working memory. Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (Peabody Picture Vocabulary test-4th Edition and Expressive Vocabulary test-2nd Edition) and measures of AA (NEPSY AA and response set and a talker discrimination task) and short-term memory (visual digit and symbol spans). Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the Peabody Picture Vocabulary test-4th Edition using language quotients to control for age effects. However, children who scored higher on the Expressive Vocabulary test-2nd Edition recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of AA and short-term memory capacity were significantly correlated with a child's ability to perceive noise-vocoded isolated words and sentences. First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children's ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally degraded speech reflects early peripheral auditory processes, as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that AA and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. These results are relevant to research carried out with listeners who have hearing loss, because they are routinely required to encode, process, and understand spectrally degraded acoustic signals.
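
    Noise-vocoding to a small number of spectral channels, as used in this record, follows a standard recipe: band-pass filter the signal, extract each band's temporal envelope, and use the envelopes to modulate band-limited noise. The sketch below is a generic four-channel vocoder with illustrative filter choices, not the processing used in the study.

    ```python
    # Hedged sketch: a simple 4-channel noise vocoder (band-pass filter bank,
    # envelope extraction, envelope-modulated noise). Corner frequencies and
    # filter orders are illustrative assumptions.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    def noise_vocode(signal, fs, n_channels=4, f_lo=100.0, f_hi=7000.0):
        rng = np.random.default_rng(0)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        out = np.zeros_like(signal, dtype=float)
        env_lp = butter(4, 160.0, btype="low", fs=fs, output="sos")
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band_sig = sosfiltfilt(band, signal)
            envelope = sosfiltfilt(env_lp, np.abs(band_sig))   # rectify + low-pass
            carrier = sosfiltfilt(band, rng.standard_normal(len(signal)))
            out += np.clip(envelope, 0.0, None) * carrier
        return out / (np.max(np.abs(out)) + 1e-12)

    # Usage with a synthetic tone sweep standing in for recorded speech.
    fs = 16000
    t = np.arange(0, 1.0, 1 / fs)
    speech_like = np.sin(2 * np.pi * (200 + 600 * t) * t)
    vocoded = noise_vocode(speech_like, fs)
    ```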

  23. Some Neurocognitive Correlates of Noise-Vocoded Speech Perception in Children with Normal Hearing: A Replication and Extension of Eisenberg et al., 2002

    PubMed Central

    Roman, Adrienne S.; Pisoni, David B.; Kronenberger, William G.; Faulkner, Kathleen F.

    2016-01-01

    Objectives Noise-vocoded speech is a valuable research tool for testing experimental hypotheses about the effects of spectral-degradation on speech recognition in adults with normal hearing (NH). However, very little research has utilized noise-vocoded speech with children with NH. Earlier studies with children with NH focused primarily on the amount of spectral information needed for speech recognition without assessing the contribution of neurocognitive processes to speech perception and spoken word recognition. In this study, we first replicated the seminal findings reported by Eisenberg et al. (2002) who investigated effects of lexical density and word frequency on noise-vocoded speech perception in a small group of children with NH. We then extended the research to investigate relations between noise-vocoded speech recognition abilities and five neurocognitive measures: auditory attention and response set, talker discrimination and verbal and nonverbal short-term working memory. Design Thirty-one children with NH between 5 and 13 years of age were assessed on their ability to perceive lexically controlled words in isolation and in sentences that were noise-vocoded to four spectral channels. Children were also administered vocabulary assessments (PPVT-4 and EVT-2) and measures of auditory attention (NEPSY Auditory Attention (AA) and Response Set (RS) and a talker discrimination task (TD)) and short-term memory (visual digit and symbol spans). Results Consistent with the findings reported in the original Eisenberg et al. (2002) study, we found that children perceived noise-vocoded lexically easy words better than lexically hard words. Words in sentences were also recognized better than the same words presented in isolation. No significant correlations were observed between noise-vocoded speech recognition scores and the PPVT-4 using language quotients to control for age effects. However, children who scored higher on the EVT-2 recognized lexically easy words better than lexically hard words in sentences. Older children perceived noise-vocoded speech better than younger children. Finally, we found that measures of auditory attention and short-term memory capacity were significantly correlated with a child’s ability to perceive noise-vocoded isolated words and sentences. Conclusions First, we successfully replicated the major findings from the Eisenberg et al. (2002) study. Because familiarity, phonological distinctiveness and lexical competition affect word recognition, these findings provide additional support for the proposal that several foundational elementary neurocognitive processes underlie the perception of spectrally-degraded speech. Second, we found strong and significant correlations between performance on neurocognitive measures and children’s ability to recognize words and sentences noise-vocoded to four spectral channels. These findings extend earlier research suggesting that perception of spectrally-degraded speech reflects early peripheral auditory processes as well as additional contributions of executive function, specifically, selective attention and short-term memory processes in spoken word recognition. The present findings suggest that auditory attention and short-term memory support robust spoken word recognition in children with NH even under compromised and challenging listening conditions. 
These results are relevant to research carried out with listeners who have hearing loss, since they are routinely required to encode, process and understand spectrally-degraded acoustic signals. PMID:28045787

  24. Cognitive Predictors of Spoken Word Recognition in Children With and Without Developmental Language Disorders.

    PubMed

    Evans, Julia L; Gillam, Ronald B; Montgomery, James W

    2018-05-10

    This study examined the influence of cognitive factors on spoken word recognition in children with developmental language disorder (DLD) and typically developing (TD) children. Participants included 234 children (aged 7;0-11;11 years;months), 117 with DLD and 117 TD children, propensity matched for age, gender, socioeconomic status, and maternal education. Children completed a series of standardized assessment measures, a forward gating task, a rapid automatic naming task, and a series of tasks designed to examine cognitive factors hypothesized to influence spoken word recognition including phonological working memory, updating, attention shifting, and interference inhibition. Spoken word recognition for both initial and final accept gate points did not differ for children with DLD and TD controls after controlling for target word knowledge in both groups. The 2 groups also did not differ on measures of updating, attention switching, and interference inhibition. Despite the lack of difference on these measures, for children with DLD, attention shifting and interference inhibition were significant predictors of spoken word recognition, whereas updating and receptive vocabulary were significant predictors of speed of spoken word recognition for the children in the TD group. Contrary to expectations, after controlling for target word knowledge, spoken word recognition did not differ for children with DLD and TD controls; however, the cognitive processing factors that influenced children's ability to recognize the target word in a stream of speech differed qualitatively for children with and without DLDs.
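
    A minimal sketch of propensity matching of the kind mentioned in this record, using hypothetical covariates (age, SES, maternal education) and simple 1-to-1 nearest-neighbour matching without replacement; it is not the authors' exact procedure.

    ```python
    # Hedged sketch: propensity-score estimation with logistic regression followed
    # by greedy 1-to-1 nearest-neighbour matching (all data simulated).
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 400
    df = pd.DataFrame({
        "dld": rng.integers(0, 2, n),            # 1 = developmental language disorder
        "age_months": rng.normal(110, 14, n),
        "ses": rng.normal(0, 1, n),
        "maternal_edu": rng.integers(10, 19, n),
    })

    X = df[["age_months", "ses", "maternal_edu"]]
    df["pscore"] = LogisticRegression(max_iter=1000).fit(X, df["dld"]).predict_proba(X)[:, 1]

    treated = df[df["dld"] == 1]
    controls = df[df["dld"] == 0].copy()
    matches = []
    for _, row in treated.iterrows():
        if controls.empty:
            break
        j = (controls["pscore"] - row["pscore"]).abs().idxmin()  # nearest neighbour
        matches.append((row.name, j))
        controls = controls.drop(j)                              # match without replacement

    print(f"matched pairs: {len(matches)}")
    ```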

  25. The effects of age and divided attention on spontaneous recognition.

    PubMed

    Anderson, Benjamin A; Jacoby, Larry L; Thomas, Ruthann C; Balota, David A

    2011-05-01

    Studies of recognition typically involve tests in which the participant's memory for a stimulus is directly questioned. There are occasions, however, in which memory occurs more spontaneously (e.g., an acquaintance seeming familiar out of context). Spontaneous recognition was investigated in a novel paradigm involving study of pictures and words followed by recognition judgments on stimuli with an old or new word superimposed over an old or new picture. Participants were instructed to make their recognition decision on either the picture or word and to ignore the distracting stimulus. Spontaneous recognition was measured as the influence of old vs. new distracters on target recognition. Across two experiments, older adults and younger adults placed under divided attention showed a greater tendency to spontaneously recognize old distracters as compared to full-attention younger adults. The occurrence of spontaneous recognition is discussed in relation to the ability to constrain retrieval to goal-relevant information.

  26. Modeling Spoken Word Recognition Performance by Pediatric Cochlear Implant Users using Feature Identification

    PubMed Central

    Frisch, Stefan A.; Pisoni, David B.

    2012-01-01

    Objective Computational simulations were carried out to evaluate the appropriateness of several psycholinguistic theories of spoken word recognition for children who use cochlear implants. These models also investigate the interrelations of commonly used measures of closed-set and open-set tests of speech perception. Design A software simulation of phoneme recognition performance was developed that uses feature identification scores as input. Two simulations of lexical access were developed. In one, early phoneme decisions are used in a lexical search to find the best matching candidate. In the second, phoneme decisions are made only when lexical access occurs. Simulated phoneme and word identification performance was then applied to behavioral data from the Phonetically Balanced Kindergarten test and Lexical Neighborhood Test of open-set word recognition. Simulations of performance were evaluated for children with prelingual sensorineural hearing loss who use cochlear implants with the MPEAK or SPEAK coding strategies. Results Open-set word recognition performance can be successfully predicted using feature identification scores. In addition, we observed no qualitative differences in performance between children using MPEAK and SPEAK, suggesting that both groups of children process spoken words similarly despite differences in input. Word recognition ability was best predicted in the model in which phoneme decisions were delayed until lexical access. Conclusions Closed-set feature identification and open-set word recognition focus on different, but related, levels of language processing. Additional insight for clinical intervention may be achieved by collecting both types of data. The most successful model of performance is consistent with current psycholinguistic theories of spoken word recognition. Thus it appears that the cognitive process of spoken word recognition is fundamentally the same for pediatric cochlear implant users and children and adults with normal hearing. PMID:11132784
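
    The better-fitting "delayed phoneme decision" model in this record can be illustrated with a toy computation: keep graded per-position phoneme probabilities (stand-ins for feature-identification scores) and score each lexical candidate by the product of its phonemes' probabilities. The lexicon and probabilities below are invented.

    ```python
    # Hedged toy sketch of the delayed-decision idea: phoneme evidence stays graded
    # until lexical access, and the best-scoring lexical candidate is selected.
    import numpy as np

    LEXICON = ["bat", "pat", "bad", "pad", "mat"]

    # Hypothetical per-position probability distributions over phonemes for an
    # input token of "bat" heard through a degraded channel.
    POSTERIOR = [
        {"b": 0.45, "p": 0.35, "m": 0.20},   # position 1
        {"a": 1.00},                         # position 2
        {"t": 0.55, "d": 0.45},              # position 3
    ]

    def word_score(word):
        probs = [POSTERIOR[i].get(ph, 1e-6) for i, ph in enumerate(word)]
        return float(np.prod(probs))

    scores = {w: word_score(w) for w in LEXICON}
    best = max(scores, key=scores.get)
    print(scores)
    print("recognized as:", best)
    ```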

  27. Assessing spoken word recognition in children who are deaf or hard of hearing: a translational approach.

    PubMed

    Kirk, Karen Iler; Prusick, Lindsay; French, Brian; Gotch, Chad; Eisenberg, Laurie S; Young, Nancy

    2012-06-01

    Under natural conditions, listeners use both auditory and visual speech cues to extract meaning from speech signals containing many sources of variability. However, traditional clinical tests of spoken word recognition routinely employ isolated words or sentences produced by a single talker in an auditory-only presentation format. The more central cognitive processes used during multimodal integration, perceptual normalization, and lexical discrimination that may contribute to individual variation in spoken word recognition performance are not assessed in conventional tests of this kind. In this article, we review our past and current research activities aimed at developing a series of new assessment tools designed to evaluate spoken word recognition in children who are deaf or hard of hearing. These measures are theoretically motivated by a current model of spoken word recognition and also incorporate "real-world" stimulus variability in the form of multiple talkers and presentation formats. The goal of this research is to enhance our ability to estimate real-world listening skills and to predict benefit from sensory aid use in children with varying degrees of hearing loss. American Academy of Audiology.

  28. Speed discrimination predicts word but not pseudo-word reading rate in adults and children

    PubMed Central

    Main, Keith L.; Pestilli, Franco; Mezer, Aviv; Yeatman, Jason; Martin, Ryan; Phipps, Stephanie; Wandell, Brian

    2014-01-01

    Word familiarity may affect magnocellular processes of word recognition. To explore this idea, we measured reading rate, speed-discrimination, and contrast detection thresholds in adults and children with a wide range of reading abilities. We found that speed-discrimination thresholds are higher in children than in adults and are correlated with age. Speed discrimination thresholds are also correlated with reading rate, but only for words, not for pseudo-words. Conversely, we found no correlation between contrast sensitivity and reading rate and no correlation between speed discrimination thresholds and WASI subtest scores. These findings support the position that reading rate is influenced by magnocellular circuitry attuned to the recognition of familiar word-forms. PMID:25278418

  29. Word recognition in Alzheimer's disease: Effects of semantic degeneration.

    PubMed

    Cuetos, Fernando; Arce, Noemí; Martínez, Carmen; Ellis, Andrew W

    2017-03-01

    Impairments of word recognition in Alzheimer's disease (AD) have been less widely investigated than impairments affecting word retrieval and production. In particular, we know little about what makes individual words easier or harder for patients with AD to recognize. We used a lexical selection task in which participants were shown sets of four items, each set consisting of one word and three non-words. The task was simply to point to the word on each trial. Forty patients with mild-to-moderate AD were significantly impaired on this task relative to matched controls who made very few errors. The number of patients with AD able to recognize each word correctly was predicted by the frequency, age of acquisition, and imageability of the words, but not by their length or number of orthographic neighbours. Patient Mini-Mental State Examination and phonological fluency scores also predicted the number of words recognized. We propose that progressive degradation of central semantic representations in AD differentially affects the ability to recognize low-imageability, low-frequency, late-acquired words, with the same factors affecting word recognition as affecting word retrieval. © 2015 The British Psychological Society.

  30. Use of Adaptive Digital Signal Processing to Improve Speech Communication for Normally Hearing and Hearing-Impaired Subjects.

    ERIC Educational Resources Information Center

    Harris, Richard W.; And Others

    1988-01-01

    A two-microphone adaptive digital noise cancellation technique improved word-recognition ability for 20 normal and 12 hearing-impaired adults by reducing multitalker speech babble and speech spectrum noise 18-22 dB. Word recognition improvements averaged 37-50 percent for normal and 27-40 percent for hearing-impaired subjects. Improvement was best…

  31. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children.

    PubMed

    Lewis, Dawna; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with their peers with NH suggest variations in how these groups use limited acoustic information to select word candidates.
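
    Gated stimuli of the kind described in this record are typically built by splicing progressively longer onsets of the target-word waveform onto a fixed sentence frame; the sketch below shows the slicing step with placeholder audio and an assumed gate size.

    ```python
    # Hedged sketch: constructing time-gated stimuli from a target-word waveform.
    # Gate duration, sampling rate, and placeholder audio are illustrative assumptions.
    import numpy as np

    fs = 22050
    gate_ms = 60                                   # duration added per gate
    sentence_frame = np.zeros(int(0.8 * fs))       # placeholder carrier audio
    target_word = np.random.default_rng(3).standard_normal(int(0.5 * fs))

    gate_len = int(gate_ms / 1000 * fs)
    n_gates = int(np.ceil(len(target_word) / gate_len))

    gated_stimuli = []
    for g in range(1, n_gates + 1):
        fragment = target_word[: g * gate_len]                # first g gates of the word
        gated_stimuli.append(np.concatenate([sentence_frame, fragment]))

    print(f"{n_gates} gated versions, "
          f"lengths {[len(s) for s in gated_stimuli[:3]]} ... samples")
    ```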

  32. Development of Phonological Constancy

    PubMed Central

    Best, Catherine T.; Tyler, Michael D.; Gooding, Tiffany N.; Orlando, Corey B.; Quann, Chelsea A.

    2009-01-01

    Efficient word recognition depends on detecting critical phonetic differences among similar-sounding words, or sensitivity to phonological distinctiveness, an ability evident at 19 months of age but unreliable at 14 to 15 months of age. However, little is known about phonological constancy, the equally crucial ability to recognize a word's identity across natural phonetic variations, such as those in cross-dialect pronunciation differences. We show that 15- and 19-month-old children recognize familiar words spoken in their native dialect, but that only the older children recognize familiar words in a dissimilar nonnative dialect, providing evidence for emergence of phonological constancy by 19 months. These results are compatible with a perceptual-attunement account of developmental change in early word recognition, but not with statistical-learning or phonological accounts. Thus, the complementary skills of phonological constancy and distinctiveness both appear at around 19 months of age, together providing the child with a fundamental insight that permits rapid vocabulary growth and later reading acquisition. PMID:19368700

  33. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words.

    PubMed

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H; Fitzgibbons, Peter J; Cohen, Julie I

    2015-02-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech.

  34. Effects of age and hearing loss on recognition of unaccented and accented multisyllabic words

    PubMed Central

    Gordon-Salant, Sandra; Yeni-Komshian, Grace H.; Fitzgibbons, Peter J.; Cohen, Julie I.

    2015-01-01

    The effects of age and hearing loss on recognition of unaccented and accented words of varying syllable length were investigated. It was hypothesized that with increments in length of syllables, there would be atypical alterations in syllable stress in accented compared to native English, and that these altered stress patterns would be sensitive to auditory temporal processing deficits with aging. Sets of one-, two-, three-, and four-syllable words with the same initial syllable were recorded by one native English and two Spanish-accented talkers. Lists of these words were presented in isolation and in sentence contexts to younger and older normal-hearing listeners and to older hearing-impaired listeners. Hearing loss effects were apparent for unaccented and accented monosyllabic words, whereas age effects were observed for recognition of accented multisyllabic words, consistent with the notion that altered syllable stress patterns with accent are sensitive for revealing effects of age. Older listeners also exhibited lower recognition scores for moderately accented words in sentence contexts than in isolation, suggesting that the added demands on working memory for words in sentence contexts impact recognition of accented speech. The general pattern of results suggests that hearing loss, age, and cognitive factors limit the ability to recognize Spanish-accented speech. PMID:25698021

  35. Word recognition materials for native speakers of Taiwan Mandarin.

    PubMed

    Nissen, Shawn L; Harris, Richard W; Dukes, Alycia

    2008-06-01

    To select, digitally record, evaluate, and psychometrically equate word recognition materials that can be used to measure the speech perception abilities of native speakers of Taiwan Mandarin in quiet. Frequently used bisyllabic words produced by male and female talkers of Taiwan Mandarin were digitally recorded and subsequently evaluated using 20 native listeners with normal hearing at 10 intensity levels (-5 to 40 dB HL) in increments of 5 dB. Using logistic regression, 200 words with the steepest psychometric slopes were divided into 4 lists and 8 half-lists that were relatively equivalent in psychometric function slope. To increase auditory homogeneity of the lists, the intensity of words in each list was digitally adjusted so that the threshold of each list was equal to the midpoint between the mean thresholds of the male and female half-lists. Digital recordings of the word recognition lists and the associated clinical instructions are available on CD upon request.
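
    The list-equating procedure in this record rests on fitting a logistic psychometric function to each word's recognition scores across presentation levels and ranking words by fitted slope. The sketch below does this on simulated data; the thresholds, slopes, and listener counts are invented.

    ```python
    # Hedged sketch: per-word logistic psychometric functions fit across
    # presentation levels, then words ranked by fitted slope (simulated data).
    import numpy as np
    from scipy.optimize import curve_fit

    levels = np.arange(-5, 45, 5, dtype=float)             # dB HL

    def logistic(x, threshold, slope):
        return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

    rng = np.random.default_rng(4)
    words = {}
    for w in range(10):                                    # 10 simulated words
        true_thr, true_slope = rng.uniform(5, 25), rng.uniform(0.1, 0.5)
        p = logistic(levels, true_thr, true_slope)
        obs = rng.binomial(20, p) / 20.0                   # 20 listeners per level
        (thr, slope), _ = curve_fit(logistic, levels, obs, p0=(15.0, 0.2), maxfev=5000)
        words[f"word{w:02d}"] = (thr, slope)

    steepest = sorted(words, key=lambda w: words[w][1], reverse=True)
    print("words ranked by psychometric slope:", steepest[:5])
    ```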

  36. A benefit of context reinstatement to recognition memory in aging: the role of familiarity processes.

    PubMed

    Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M

    2017-11-01

    Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.

  37. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks: voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while those of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment, a response behavior associated with linear systems.
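
    The hair-cell model named in this record is based on the Hopf normal form. Below is a hedged numerical sketch of a sinusoidally forced Hopf oscillator with a simple amplitude-threshold readout; the frequency, step size, and threshold are illustrative choices, not the study's parameters.

    ```python
    # Hedged sketch: Euler integration of a forced Hopf normal-form oscillator,
    # dz/dt = (mu + i*omega0) z - |z|^2 z + F e^{i*omega0*t}, with an amplitude
    # threshold as the "detection" readout. A low carrier frequency is used only
    # to keep the toy integration short.
    import numpy as np

    def hopf_response(forcing_amp, mu=-0.01, f0=10.0, dt=1e-4, n_steps=20000):
        omega0 = 2 * np.pi * f0          # oscillator and forcing share one frequency
        z = 0.0 + 0.0j
        amps = np.empty(n_steps)
        for n in range(n_steps):
            drive = forcing_amp * np.exp(1j * omega0 * n * dt)
            dz = (mu + 1j * omega0) * z - (abs(z) ** 2) * z + drive
            z += dt * dz
            amps[n] = abs(z)
        return amps[n_steps // 2:].mean()   # time-averaged response, second half of run

    threshold = 0.05                        # illustrative detection criterion
    for amp in (0.001, 0.1, 1.0):
        r = hopf_response(amp)
        verdict = "detected" if r > threshold else "below threshold"
        print(f"forcing {amp:6.3f} -> mean response {r:.4f} ({verdict})")
    ```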

  18. Textual emotion recognition for enhancing enterprise computing

    NASA Astrophysics Data System (ADS)

    Quan, Changqin; Ren, Fuji

    2016-05-01

    The growing interest in affective computing (AC) brings many valuable research topics that can meet different application demands in enterprise systems. The present study explores a subarea of AC techniques: textual emotion recognition for enhancing enterprise computing. Multi-label emotion recognition in text can provide a more comprehensive understanding of emotions than single-label emotion recognition. A representation of 'emotion state in text' is proposed to encompass the multidimensional emotions in text. It provides a formal description of the configurations of basic emotions as well as of the relations between them. Our method allows recognition of emotions for words that carry indirect emotions, emotion ambiguity, and multiple emotions. We further investigate the effect of word order on emotional expression by comparing the performance of a bag-of-words model and a sequence model for multi-label sentence emotion recognition. The experiments show that classification results under the sequence model are better than under the bag-of-words model, and a homogeneous Markov model showed promising results for multi-label sentence emotion recognition. This emotion recognition system provides a convenient way to acquire valuable emotion information and to improve enterprise competitive ability in many respects.
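
    The bag-of-words versus sequence contrast can be illustrated with a toy multi-label classifier. The sketch below is not the paper's system: the sentences, labels, and emotion set are invented, the classifier is a one-vs-rest logistic regression, and word n-grams serve only as a crude stand-in for the sequence (Markov) model, to show how word-order-sensitive features plug into the same multi-label pipeline.

        # Toy multi-label emotion classification; data and labels are invented.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.multiclass import OneVsRestClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import MultiLabelBinarizer

        texts = [
            "the launch delay worried the whole team",
            "great quarterly results delighted investors",
            "the outage angered customers and worried support staff",
            "the new release delighted users",
        ]
        labels = [{"anxiety"}, {"joy"}, {"anger", "anxiety"}, {"joy"}]

        mlb = MultiLabelBinarizer()
        Y = mlb.fit_transform(labels)                 # multi-label indicator matrix

        for name, ngrams in [("bag-of-words", (1, 1)), ("with word order (bigrams)", (1, 2))]:
            clf = make_pipeline(
                CountVectorizer(ngram_range=ngrams),
                OneVsRestClassifier(LogisticRegression(max_iter=1000)),
            )
            clf.fit(texts, Y)
            pred = clf.predict(["customers were angered by the delay"])
            print(name, "->", mlb.inverse_transform(pred))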

  19. Syllables and bigrams: orthographic redundancy and syllabic units affect visual word recognition at different processing levels.

    PubMed

    Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M

    2009-04-01

    Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.

  20. Exploring Individual Differences in Irregular Word Recognition among Children with Early-Emerging and Late-Emerging Word Reading Difficulty

    ERIC Educational Resources Information Center

    Steacy, Laura M.; Kearns, Devin M.; Gilbert, Jennifer K.; Compton, Donald L.; Cho, Eunsoo; Lindstrom, Esther R.; Collins, Alyson A.

    2017-01-01

    Models of irregular word reading that take into account both child- and word-level predictors have not been evaluated in typically developing children and children with reading difficulty (RD). The purpose of the present study was to model individual differences in irregular word reading ability among 5th grade children (N = 170), oversampled for…

  1. Effect of Context and Hearing Loss on Time-Gated Word Recognition in Children

    PubMed Central

    Lewis, Dawna E.; Kopun, Judy; McCreery, Ryan; Brennan, Marc; Nishi, Kanae; Cordrey, Evan; Stelmachowicz, Pat; Moeller, Mary Pat

    2016-01-01

    Objectives The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- vs. low-predictability sentences (HP, LP), where semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of linguistic cues provided by HP sentences to support word recognition. CHH were expected to require greater acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups. Design Sixteen CHH with mild-to-moderate hearing loss and 16 age-matched CNH participated (5–12 yrs). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create series of sentences in which the final word was progressively longer by the gated increments. Stimuli were presented monaurally through headphones and children were asked to identify the target word at each successive gate. They also were asked to rate their confidence in their word choice using a 5- or 3-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills. Results Analysis of language measures revealed that the CNH and CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably to CNH in terms of successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition. CHH needed more gates than CNH to identify words in the LP condition. CNH were significantly lower in rating their confidence in the LP condition than in the HP condition. CHH, however, were not significantly different in confidence between the conditions. Error patterns for incorrect word responses across gates and predictability varied depending on hearing status. Conclusions The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared to their peers with normal hearing suggest variations in how these groups use limited acoustic information to select word candidates. PMID:28045838
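
    For readers unfamiliar with gating, a minimal sketch of the stimulus construction follows. The sampling rate, gate duration, and placeholder waveforms are assumptions for illustration; the study's actual editing parameters are not reproduced here.

        # Minimal gating sketch with placeholder audio; values are assumed, not the study's.
        import numpy as np

        def make_gated_series(frame, final_word, fs, gate_ms=60):
            """Return stimuli in which the final word grows by one gate at a time."""
            gate = int(fs * gate_ms / 1000)
            n_gates = int(np.ceil(len(final_word) / gate))
            return [np.concatenate([frame, final_word[:min((k + 1) * gate, len(final_word))]])
                    for k in range(n_gates)]

        fs = 22050                                              # assumed sampling rate
        frame = np.zeros(int(1.2 * fs))                         # placeholder sentence frame
        word = np.random.default_rng(0).standard_normal(int(0.5 * fs))  # placeholder final word
        series = make_gated_series(frame, word, fs)
        print(len(series), "gated stimuli; final-word samples per stimulus:",
              [len(s) - len(frame) for s in series[:4]], "...")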

  2. Multisensory speech perception in autism spectrum disorder: From phoneme to whole-word perception.

    PubMed

    Stevenson, Ryan A; Baum, Sarah H; Segers, Magali; Ferber, Susanne; Barense, Morgan D; Wallace, Mark T

    2017-07-01

    Speech perception in noisy environments is boosted when a listener can see the speaker's mouth and integrate the auditory and visual speech information. Autistic children have a diminished capacity to integrate sensory information across modalities, which contributes to core symptoms of autism, such as impairments in social communication. We investigated the abilities of autistic and typically-developing (TD) children to integrate auditory and visual speech stimuli in various signal-to-noise ratios (SNR). Measurements of both whole-word and phoneme recognition were recorded. At the level of whole-word recognition, autistic children exhibited reduced performance in both the auditory and audiovisual modalities. Importantly, autistic children showed reduced behavioral benefit from multisensory integration with whole-word recognition, specifically at low SNRs. At the level of phoneme recognition, autistic children exhibited reduced performance relative to their TD peers in auditory, visual, and audiovisual modalities. However, and in contrast to their performance at the level of whole-word recognition, both autistic and TD children showed benefits from multisensory integration for phoneme recognition. In accordance with the principle of inverse effectiveness, both groups exhibited greater benefit at low SNRs relative to high SNRs. Thus, while autistic children showed typical multisensory benefits during phoneme recognition, these benefits did not translate to typical multisensory benefit of whole-word recognition in noisy environments. We hypothesize that sensory impairments in autistic children raise the SNR threshold needed to extract meaningful information from a given sensory input, resulting in subsequent failure to exhibit behavioral benefits from additional sensory information at the level of whole-word recognition. Autism Res 2017, 10: 1280-1290. © 2017 International Society for Autism Research, Wiley Periodicals, Inc.

  3. Mark My Words: Tone of Voice Changes Affective Word Representations in Memory

    PubMed Central

    Schirmer, Annett

    2010-01-01

    The present study explored the effect of speaker prosody on the representation of words in memory. To this end, participants were presented with a series of words and asked to remember the words for a subsequent recognition test. During study, words were presented auditorily with an emotional or neutral prosody, whereas during test, words were presented visually. Recognition performance was comparable for words studied with emotional and neutral prosody. However, subsequent valence ratings indicated that study prosody changed the affective representation of words in memory. Compared to words with neutral prosody, words with sad prosody were later rated as more negative and words with happy prosody were later rated as more positive. Interestingly, the participants' ability to remember study prosody failed to predict this effect, suggesting that changes in word valence were implicit and associated with initial word processing rather than word retrieval. Taken together these results identify a mechanism by which speakers can have sustained effects on listener attitudes towards word referents. PMID:20169154

  4. The Effects of Word Walls and Word Wall Activities on the Reading Fluency of First Grade Students

    ERIC Educational Resources Information Center

    Jasmine, Joanne; Schiesl, Pamela

    2009-01-01

    Reading fluency is the ability to read orally with speed and efficiency, including word recognition, decoding, and comprehension (Chard & Pikulski, 2005). Able readers achieve fluency as they recognize words with speed and build upon them to aid in comprehension (Pumfrey & Elliott, 1990). One way to help students achieve fluency is through the use…

  5. He Said, She Said: Effects of Bilingualism on Cross-Talker Word Recognition in Infancy

    ERIC Educational Resources Information Center

    Singh, Leher

    2018-01-01

    The purpose of the current study was to examine effects of bilingual language input on infant word segmentation and on talker generalization. In the present study, monolingually and bilingually exposed infants were compared on their abilities to recognize familiarized words in speech and to maintain generalizable representations of familiarized…

  6. Strategic value-directed learning and memory in Alzheimer's disease and behavioural-variant frontotemporal dementia.

    PubMed

    Wong, Stephanie; Irish, Muireann; Savage, Greg; Hodges, John R; Piguet, Olivier; Hornberger, Michael

    2018-02-12

    In healthy adults, the ability to prioritize learning of highly valued information is supported by executive functions and enhances subsequent memory retrieval for this information. In Alzheimer's disease (AD) and behavioural-variant frontotemporal dementia (bvFTD), marked deficits are evident in learning and memory, presenting in the context of executive dysfunction. It is unclear whether these patients show a typical memory bias for higher valued stimuli. We administered a value-directed word-list learning task to AD (n = 10) and bvFTD (n = 21) patients and age-matched healthy controls (n = 22). Each word was assigned a low, medium or high point value, and participants were instructed to maximize the number of points earned across three learning trials. Participants' memory for the words was assessed on a delayed recall trial, followed by a recognition test for the words and corresponding point values. Relative to controls, both patient groups showed poorer overall learning, delayed recall and recognition. Despite these impairments, patients with AD preferentially recalled high-value words on learning trials and showed significant value-directed enhancement of recognition memory for the words and points. Conversely, bvFTD patients did not prioritize recall of high-value words during learning trials, and this reduced selectivity was related to inhibitory dysfunction. Nonetheless, bvFTD patients showed value-directed enhancement of recognition memory for the point values, suggesting a mismatch between memory of high-value information and the ability to apply this in a motivationally salient context. Our findings demonstrate that value-directed enhancement of memory may persist to some degree in patients with dementia, despite pronounced deficits in learning and memory. © 2018 The British Psychological Society.

  7. Effects of hydrocortisone on false memory recognition in healthy men and women.

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schaeuffele, Carmen; Fleischer, Juliane; Hellmann-Regen, Julian; Roepke, Stefan; Moritz, Steffen; Otte, Christian; Wingenfeld, Katja

    2016-12-01

    Studies examining the effect of stress on false memories using psychosocial and physiological stressors have yielded diverse results. In the present study, we systematically tested the effect of exogenous hydrocortisone using a false memory paradigm. In this placebo-controlled study, 37 healthy men and 38 healthy women (mean age 24.59 years) received either 10 mg of hydrocortisone or placebo 75 min before completing the false memory (Deese-Roediger-McDermott, DRM) paradigm. We used emotionally charged and neutral DRM-based word lists to compare false recognition rates with true recognition rates. Overall, we expected an increase in false memory after hydrocortisone compared to placebo. No differences between the cortisol and placebo groups emerged for either false or true recognition performance. In general, false recognition rates were lower than true recognition rates. Furthermore, we found a valence effect (neutral, positive, negative, and disgust word stimuli), indicating higher rates of true and false recognition for emotional compared to neutral words. We also found an interaction between sex and recognition: post hoc t tests showed that, for true recognition, women had significantly better memory performance than men, independent of treatment. This study does not support the hypothesis that cortisol decreases the ability to distinguish between old and novel words in young healthy individuals. However, sex and the emotional valence of word stimuli appear to be important moderators. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  8. Some factors underlying individual differences in speech recognition on PRESTO: a first report.

    PubMed

    Tamati, Terrin N; Gilbert, Jaimie L; Pisoni, David B

    2013-01-01

    Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core underlying factors that influence speech recognition abilities. To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on the Perceptually Robust English Sentence Test Open-set (PRESTO), a new high-variability sentence recognition test under adverse listening conditions. Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Participants' assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the Behavioral Rating Inventory of Executive Function-Adult Version (BRIEF-A) self-report questionnaire on executive function, and two performance subtests of the Wechsler Abbreviated Scale of Intelligence (WASI) Performance Intelligence Quotient (IQ; nonverbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. American Academy of Audiology.

  9. Some Factors Underlying Individual Differences in Speech Recognition on PRESTO: A First Report

    PubMed Central

    Tamati, Terrin N.; Gilbert, Jaimie L.; Pisoni, David B.

    2013-01-01

    Background Previous studies investigating speech recognition in adverse listening conditions have found extensive variability among individual listeners. However, little is currently known about the core, underlying factors that influence speech recognition abilities. Purpose To investigate sensory, perceptual, and neurocognitive differences between good and poor listeners on PRESTO, a new high-variability sentence recognition test under adverse listening conditions. Research Design Participants who fell in the upper quartile (HiPRESTO listeners) or lower quartile (LoPRESTO listeners) on key word recognition on sentences from PRESTO in multitalker babble completed a battery of behavioral tasks and self-report questionnaires designed to investigate real-world hearing difficulties, indexical processing skills, and neurocognitive abilities. Study Sample Young, normal-hearing adults (N = 40) from the Indiana University community participated in the current study. Data Collection and Analysis Participants’ assessment of their own real-world hearing difficulties was measured with a self-report questionnaire on situational hearing and hearing health history. Indexical processing skills were assessed using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Neurocognitive abilities were measured with the Auditory Digit Span Forward (verbal short-term memory) and Digit Span Backward (verbal working memory) tests, the Stroop Color and Word Test (attention/inhibition), the WordFam word familiarity test (vocabulary size), the BRIEF-A self-report questionnaire on executive function, and two performance subtests of the WASI Performance IQ (non-verbal intelligence). Scores on self-report questionnaires and behavioral tasks were tallied and analyzed by listener group (HiPRESTO and LoPRESTO). Results The extreme groups did not differ overall on self-reported hearing difficulties in real-world listening environments. However, an item-by-item analysis of questions revealed that LoPRESTO listeners reported significantly greater difficulty understanding speakers in a public place. HiPRESTO listeners were significantly more accurate than LoPRESTO listeners at gender discrimination and regional dialect categorization, but they did not differ on talker discrimination accuracy or response time, or gender discrimination response time. HiPRESTO listeners also had longer forward and backward digit spans, higher word familiarity ratings on the WordFam test, and lower (better) scores for three individual items on the BRIEF-A questionnaire related to cognitive load. The two groups did not differ on the Stroop Color and Word Test or either of the WASI performance IQ subtests. Conclusions HiPRESTO listeners and LoPRESTO listeners differed in indexical processing abilities, short-term and working memory capacity, vocabulary size, and some domains of executive functioning. These findings suggest that individual differences in the ability to encode and maintain highly detailed episodic information in speech may underlie the variability observed in speech recognition performance in adverse listening conditions using high-variability PRESTO sentences in multitalker babble. PMID:24047949

  10. The Importance of Concept of Word in Text as a Predictor of Sight Word Development in Spanish

    ERIC Educational Resources Information Center

    Ford, Karen L.; Invernizzi, Marcia A.; Meyer, J. Patrick

    2015-01-01

    The goal of the current study was to determine whether Concept of Word in Text (COW-T) predicts later sight word reading achievement in Spanish, as it does in English. COW-T requires that children have beginning sound awareness, automatic recognition of letters and letter sounds, and the ability to coordinate these skills to finger point…

  11. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  12. The Effect of Background Noise on the Word Activation Process in Nonnative Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Scharenborg, Odette; Coumans, Juul M. J.; van Hout, Roeland

    2018-01-01

    This article investigates 2 questions: (1) does the presence of background noise lead to a differential increase in the number of simultaneously activated candidate words in native and nonnative listening? And (2) do individual differences in listeners' cognitive and linguistic abilities explain the differential effect of background noise on…

  13. The Effect of Lexical Content on Dichotic Speech Recognition in Older Adults.

    PubMed

    Findlen, Ursula M; Roup, Christina M

    2016-01-01

    Age-related auditory processing deficits have been shown to negatively affect speech recognition for older adult listeners. In contrast, older adults gain benefit from their ability to make use of semantic and lexical content of the speech signal (i.e., top-down processing), particularly in complex listening situations. Assessment of auditory processing abilities among aging adults should take into consideration semantic and lexical content of the speech signal. The purpose of this study was to examine the effects of lexical and attentional factors on dichotic speech recognition performance characteristics for older adult listeners. A repeated measures design was used to examine differences in dichotic word recognition as a function of lexical and attentional factors. Thirty-five older adults (61-85 yr) with sensorineural hearing loss participated in this study. Dichotic speech recognition was evaluated using consonant-vowel-consonant (CVC) word and nonsense CVC syllable stimuli administered in the free recall, directed recall right, and directed recall left response conditions. Dichotic speech recognition performance for nonsense CVC syllables was significantly poorer than performance for CVC words. Dichotic recognition performance varied across response condition for both stimulus types, which is consistent with previous studies on dichotic speech recognition. Inspection of individual results revealed that five listeners demonstrated an auditory-based left ear deficit for one or both stimulus types. Lexical content of stimulus materials affects performance characteristics for dichotic speech recognition tasks in the older adult population. The use of nonsense CVC syllable material may provide a way to assess dichotic speech recognition performance while potentially lessening the effects of lexical content on performance (i.e., measuring bottom-up auditory function both with and without top-down processing). American Academy of Audiology.

  14. The relationship between novel word learning and anomia treatment success in adults with chronic aphasia.

    PubMed

    Dignam, Jade; Copland, David; Rawlings, Alicia; O'Brien, Kate; Burfein, Penni; Rodriguez, Amy D

    2016-01-29

    Learning capacity may influence an individual's response to aphasia rehabilitation. However, investigations into the relationship between novel word learning ability and response to anomia therapy are lacking. The aim of the present study was to evaluate the novel word learning ability in post-stroke aphasia and to establish the relationship between learning ability and anomia treatment outcomes. We also explored the influence of locus of language breakdown on novel word learning ability and anomia treatment response. 30 adults (6F; 24M) with chronic, post-stroke aphasia were recruited to the study. Prior to treatment, participants underwent an assessment of language, which included the Comprehensive Aphasia Test and three baseline confrontation naming probes in order to develop sets of treated and untreated items. We also administered the novel word learning paradigm, in which participants learnt novel names associated with unfamiliar objects and were immediately tested on recall (expressive) and recognition (receptive) tasks. Participants completed 48 h of Aphasia Language Impairment and Functioning Therapy (Aphasia LIFT) over a 3 week (intensive) or 8 week (distributed) schedule. Therapy primarily targeted the remediation of word retrieval deficits, so naming of treated and untreated items immediately post-therapy and at 1 month follow-up was used to determine therapeutic response. Performance on recall and recognition tasks demonstrated that participants were able to learn novel words; however, performance was variable and was influenced by participants' aphasia severity, lexical-semantic processing and locus of language breakdown. Novel word learning performance was significantly correlated with participants' response to therapy for treated items at post-therapy. In contrast, participants' novel word learning performance was not correlated with therapy gains for treated items at 1 month follow-up or for untreated items at either time point. Therapy intensity did not influence treatment outcomes. This is the first group study to directly examine the relationship between novel word learning and therapy outcomes for anomia rehabilitation in adults with aphasia. Importantly, we found that novel word learning performance was correlated with therapy outcomes. We propose that novel word learning ability may contribute to the initial acquisition of treatment gains in anomia rehabilitation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Clinical Strategies for Sampling Word Recognition Performance.

    PubMed

    Schlauch, Robert S; Carney, Edward

    2018-04-17

    Computer simulation was used to estimate the statistical properties of searches for maximum word recognition ability (PB max). These involve presenting multiple lists and discarding all scores but that of the 1 list that produced the highest score. The simulations, which model limitations inherent in the precision of word recognition scores, were done to inform clinical protocols. A secondary consideration was a derivation of 95% confidence intervals for significant changes in score from phonemic scoring of a 50-word list. The PB max simulations were conducted on a "client" with flat performance intensity functions. The client's performance was assumed to be 60% initially and 40% for a second assessment. Thousands of estimates were obtained to examine the precision of (a) single lists and (b) multiple lists using a PB max procedure. This method permitted summarizing the precision for assessing a 20% drop in performance. A single 25-word list could identify only 58.4% of the cases in which performance fell from 60% to 40%. A single 125-word list identified 99.8% of the declines correctly. Presenting 3 or 5 lists to find PB max produced an undesirable finding: an increase in the word recognition score. A 25-word list produces unacceptably low precision for making clinical decisions. This finding holds in both single and multiple 25-word lists, as in a search for PB max. A table is provided, giving estimates of 95% critical ranges for successive presentations of a 50-word list analyzed by the number of phonemes correctly identified.
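
    A few lines of Monte Carlo code reproduce the flavour of these simulations. This is my own sketch, not the authors' code: the list sizes and the "true" 60% score are taken from the abstract, while everything else (sample count, seed) is arbitrary. Word recognition scores are drawn as binomial samples, and keeping only the best of several lists inflates the PB max estimate, as the abstract reports.

        # Illustrative Monte Carlo sketch of the PB max bias; not the authors' simulation code.
        import numpy as np

        rng = np.random.default_rng(1)
        true_p, n_words, n_sim = 0.60, 25, 100_000

        for n_lists in (1, 3, 5):
            # each row is one client assessment; keep the best of n_lists 25-word lists
            scores = rng.binomial(n_words, true_p, size=(n_sim, n_lists)) / n_words
            pb_max = scores.max(axis=1)
            print(f"{n_lists} list(s): mean PB max = {pb_max.mean():.3f}, SD = {pb_max.std():.3f}")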

  16. Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning.

    PubMed

    Adlof, Suzanne M; Patten, Hannah

    2017-03-01

    This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Fifty children, with a mean age of 8 years (range 5-12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.
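
    A hierarchical regression of this kind can be sketched as follows. The data are simulated and the effect sizes invented, so the output only illustrates the delta-R-squared logic (age entered first, then nonword repetition and vocabulary), not the study's results.

        # Hierarchical regression sketch on simulated data; not the study's dataset or results.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 50
        age = rng.uniform(5, 12, n)
        nonword_rep = 0.3 * age + rng.normal(0, 1, n)
        vocab = 0.5 * age + rng.normal(0, 1, n)
        word_learning = 0.2 * age + 0.6 * nonword_rep + 0.4 * vocab + rng.normal(0, 1, n)

        step1 = sm.OLS(word_learning, sm.add_constant(np.column_stack([age]))).fit()
        step2 = sm.OLS(word_learning,
                       sm.add_constant(np.column_stack([age, nonword_rep, vocab]))).fit()
        print(f"R2 age only: {step1.rsquared:.2f}; R2 full model: {step2.rsquared:.2f}; "
              f"delta R2: {step2.rsquared - step1.rsquared:.2f}")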

  17. Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning

    ERIC Educational Resources Information Center

    Adlof, Suzanne M.; Patten, Hannah

    2017-01-01

    Purpose: This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Method: Fifty children, with a mean age of 8 years (range 5-12…

  18. Amplitude (vu and rms) and Temporal (msec) Measures of Two Northwestern University Auditory Test No. 6 Recordings.

    PubMed

    Wilson, Richard H

    2015-04-01

    In 1940, a cooperative effort by the radio networks and Bell Telephone produced the volume unit (vu) meter that has been the mainstay instrument for monitoring the level of speech signals in commercial broadcasting and research laboratories. With the use of computers, today the amplitude of signals can be quantified easily using the root mean square (rms) algorithm. Researchers had previously reported that amplitude estimates of sentences and running speech were 4.8 dB higher when measured with a vu meter than when calculated with rms. This study addresses the vu-rms relation as applied to the carrier phrase and target word paradigm used to assess word-recognition abilities, the premise being that by definition the word-recognition paradigm is a special and different case from that described previously. The purpose was to evaluate the vu and rms amplitude relations for the carrier phrases and target words commonly used to assess word-recognition abilities. In addition, the relations between rms level and recognition performance for the target words were examined. Descriptive and correlational. Two recorded versions of the Northwestern University Auditory Test No. 6 were evaluated, the Auditec of St. Louis (Auditec) male speaker and the Department of Veterans Affairs (VA) female speaker. Using both visual and auditory cues from a waveform editor, the temporal onsets and offsets were defined for each carrier phrase and each target word. The rms amplitudes for those segments then were computed and expressed in decibels with reference to the maximum digitization range. The data were maintained for each of the four Northwestern University Auditory Test No. 6 word lists. Descriptive analyses were used, with linear regressions evaluating the reliability of the measurement technique and the relation between the rms levels of the target words and recognition performances. Although there was a 1.3 dB difference between the calibration tones, the mean levels of the carrier phrases for the two recordings were -14.8 dB (Auditec) and -14.1 dB (VA) with standard deviations <1 dB. For the target words, the mean amplitudes were -19.9 dB (Auditec) and -18.3 dB (VA) with standard deviations ranging from 1.3 to 2.4 dB. The mean durations for the carrier phrases of both recordings were 593-594 msec, with the mean durations of the target words slightly different: 509 msec (Auditec) and 528 msec (VA). Random relations were observed between the recognition performances and rms levels of the target words. Amplitude and temporal data for the individual words are provided. The rms levels of the carrier phrases closely approximated (±1 dB) the rms levels of the calibration tones, both of which were set to 0 vu (dB). The rms levels of the target words were 5-6 dB below the levels of the carrier phrases and were substantially more variable than the levels of the carrier phrases. The relation between the rms levels of the target words and recognition performances on the words was random. American Academy of Audiology.
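
    The rms measurement itself is straightforward to express in code. The sketch below uses Gaussian-noise stand-ins scaled to levels roughly like those reported, not the actual NU-6 waveforms or segment boundaries, and simply shows rms level in dB relative to the maximum digitization range (full scale).

        # Illustration with noise stand-ins; not the NU-6 recordings or the study's segmentation.
        import numpy as np

        def rms_db_re_full_scale(samples, full_scale=1.0):
            """RMS level of a segment in dB re the maximum digitization range."""
            rms = np.sqrt(np.mean(np.square(samples)))
            return 20.0 * np.log10(rms / full_scale)

        rng = np.random.default_rng(3)
        fs = 22050                                             # assumed sampling rate
        carrier = 0.18 * rng.standard_normal(int(0.593 * fs))  # stand-in for a 593-ms carrier phrase
        target = 0.10 * rng.standard_normal(int(0.509 * fs))   # stand-in for a 509-ms target word
        print(f"carrier phrase: {rms_db_re_full_scale(carrier):.1f} dB re full scale, "
              f"target word: {rms_db_re_full_scale(target):.1f} dB re full scale")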

  19. Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology.

    PubMed

    Smith, Sherri L; Pichora-Fuller, M Kathleen; Alexander, Genevieve

    The purpose of this study was to develop the Word Auditory Recognition and Recall Measure (WARRM) and to conduct the inaugural evaluation of the performance of younger adults with normal hearing, older adults with normal to near-normal hearing, and older adults with pure-tone hearing loss on the WARRM. The WARRM is a new test designed for concurrently assessing word recognition and auditory working memory performance in adults who may have pure-tone hearing loss. The test consists of 100 monosyllabic words based on widely used speech-recognition test materials. The 100 words are presented in recall set sizes of 2, 3, 4, 5, and 6 items, with 5 trials in each set size. The WARRM yields a word-recognition score and a recall score. The WARRM was administered to all participants in three listener groups under two processing conditions in a mixed model (between-subjects, repeated measures) design. The between-subjects factor was group, with 48 younger listeners with normal audiometric thresholds (younger listeners with normal hearing [YNH]), 48 older listeners with normal thresholds through 3000 Hz (older listeners with normal hearing [ONH]), and 48 older listeners with sensorineural hearing loss (older listeners with hearing loss [OHL]). The within-subjects factor was WARRM processing condition (no additional task or with an alphabet judgment task). The associations between results on the WARRM test and results on a battery of other auditory and memory measures were examined. Word-recognition performance on the WARRM was not affected by processing condition or set size and was near ceiling for the YNH and ONH listeners (99 and 98%, respectively) with both groups performing significantly better than the OHL listeners (83%). The recall results were significantly better for the YNH, ONH, and OHL groups with no processing (93, 84, and 75%, respectively) than with the alphabet processing (86, 77, and 70%). In both processing conditions, recall was best for YNH, followed by ONH, and worst for OHL listeners. WARRM recall scores were significantly correlated with other memory measures. In addition, WARRM recall scores were correlated with results on the Words-In-Noise (WIN) test for the OHL listeners in the no processing condition and for ONH listeners in the alphabet processing condition. Differences in the WIN and recall scores of these groups are consistent with the interpretation that the OHL listeners found listening to be sufficiently demanding to affect recall even in the no processing condition, whereas the ONH group listeners did not find it so demanding until the additional alphabet processing task was added. These findings demonstrate the feasibility of incorporating an auditory memory test into a word-recognition test to obtain measures of both word recognition and working memory simultaneously. The correlation of WARRM recall with scores from other memory measures is evidence of construct validity. The observation of correlations between the WIN thresholds with each of the older groups and recall scores in certain processing conditions suggests that recall depends on listeners' word-recognition abilities in noise in combination with the processing demands of the task. The recall score provides additional information beyond the pure-tone audiogram and word-recognition scores that may help rehabilitative audiologists assess the listening abilities of patients with hearing loss.

  20. Effect of training on word-recognition performance in noise for young normal-hearing and older hearing-impaired listeners.

    PubMed

    Burk, Matthew H; Humes, Larry E; Amos, Nathan E; Strauser, Lauren E

    2006-06-01

    The objective of this study was to evaluate the effectiveness of a training program for hearing-impaired listeners to improve their speech-recognition performance within a background noise when listening to amplified speech. Both noise-masked young normal-hearing listeners, used to model the performance of elderly hearing-impaired listeners, and a group of elderly hearing-impaired listeners participated in the study. Of particular interest was whether training on an isolated word list presented by a standardized talker can generalize to everyday speech communication across novel talkers. Word-recognition performance was measured for both young normal-hearing (n = 16) and older hearing-impaired (n = 7) adults. Listeners were trained on a set of 75 monosyllabic words spoken by a single female talker over a 9- to 14-day period. Performance for the familiar (trained) talker was measured before and after training in both open-set and closed-set response conditions. Performance on the trained words of the familiar talker were then compared with those same words spoken by three novel talkers and to performance on a second set of untrained words presented by both the familiar and unfamiliar talkers. The hearing-impaired listeners returned 6 mo after their initial training to examine retention of the trained words as well as their ability to transfer any knowledge gained from word training to sentences containing both trained and untrained words. Both young normal-hearing and older hearing-impaired listeners performed significantly better on the word list in which they were trained versus a second untrained list presented by the same talker. Improvements on the untrained words were small but significant, indicating some generalization to novel words. The large increase in performance on the trained words, however, was maintained across novel talkers, pointing to the listener's greater focus on lexical memorization of the words rather than a focus on talker-specific acoustic characteristics. On return in 6 mo, listeners performed significantly better on the trained words relative to their initial baseline performance. Although the listeners performed significantly better on trained versus untrained words in isolation, once the trained words were embedded in sentences, no improvement in recognition over untrained words within the same sentences was shown. Older hearing-impaired listeners were able to significantly improve their word-recognition abilities through training with one talker and to the same degree as young normal-hearing listeners. The improved performance was maintained across talkers and across time. This might imply that training a listener using a standardized list and talker may still provide benefit when these same words are presented by novel talkers outside the clinic. However, training on isolated words was not sufficient to transfer to fluent speech for the specific sentence materials used within this study. Further investigation is needed regarding approaches to improve a hearing aid user's speech understanding in everyday communication situations.

  1. Selective verbal recognition memory impairments are associated with atrophy of the language network in non-semantic variants of primary progressive aphasia.

    PubMed

    Nilakantan, Aneesha S; Voss, Joel L; Weintraub, Sandra; Mesulam, M-Marsel; Rogalski, Emily J

    2017-06-01

    Primary progressive aphasia (PPA) is clinically defined by an initial loss of language function and preservation of other cognitive abilities, including episodic memory. While PPA primarily affects the left-lateralized perisylvian language network, some clinical neuropsychological tests suggest concurrent initial memory loss. The goal of this study was to test recognition memory of objects and words in the visual and auditory modalities to separate language-processing impairments from retentive memory in PPA. Individuals with non-semantic PPA had longer reaction times and higher false alarms for auditory word stimuli compared to visual object stimuli. Moreover, false alarms for auditory word recognition memory were related to cortical thickness within the left inferior frontal gyrus and left temporal pole, while false alarms for visual object recognition memory were related to cortical thickness within the right temporal pole. This pattern of results suggests that specific vulnerability in processing verbal stimuli can hinder episodic memory in PPA, and provides evidence for differential contributions of the left and right temporal poles in word and object recognition memory. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. The Contribution of General Reading Ability to Science Achievement

    ERIC Educational Resources Information Center

    Reed, Deborah K.; Petscher, Yaacov; Truckenmiller, Adrea J.

    2017-01-01

    This study explored the relationship between the reading ability and science achievement of students in grades 5, 8, and 9. Reading ability was assessed with four measures: word recognition, vocabulary, syntactic knowledge, and comprehension (23% of all passages were on science topics). Science achievement was assessed with state…

  3. Investigating the Improvement of Decoding Abilities and Working Memory in Children with Incremental or Entity Personal Conceptions of Intelligence: Two Case Reports

    PubMed Central

    Alesi, Marianna; Rappo, Gaetano; Pepi, Annamaria

    2016-01-01

    One of the most significant current discussions has led to the hypothesis that domain-specific training programs alone are not enough to improve reading achievement or working memory abilities. Incremental or Entity personal conceptions of intelligence may be assumed to be an important prognostic factor in overcoming domain-specific deficits. Specifically, incremental students tend to be more oriented toward change and autonomy and are able to adopt more efficacious strategies. This study aims at examining the effect of personal conceptions of intelligence in strengthening the efficacy of a multidimensional intervention program designed to improve decoding abilities and working memory. Participants included two children (M age = 10 years) with developmental dyslexia and different conceptions of intelligence. The children were tested on a whole battery of reading and spelling tests commonly used in the assessment of reading disabilities in Italy. Afterwards, they were given a multimedia test to measure motivational factors such as conceptions of intelligence and achievement goals. The children took part in the T.I.R.D. Multimedia Training for the Rehabilitation of Dyslexia (Rappo and Pepi, 2010), reinforced by specific units to improve verbal working memory, for 3 months. This training consisted of specific tasks to rehabilitate both visual and phonological strategies (sound blending, word segmentation, alliteration test and rhyme test, letter recognition, digraph recognition, trigraph recognition, and word recognition as samples of visual tasks) and verbal working memory (rapid word and non-word recognition). Posttest evaluations showed that the child holding the incremental theory of intelligence improved more than the child holding a static representation. On the whole, this study highlights the importance of treatment programs in which the specificity of deficits and motivational factors are both taken into account. There is a need to plan multifaceted intervention programs based on a transverse approach, considering both cognitive and motivational factors. PMID:26779069

  4. Auditory Word Serial Recall Benefits from Orthographic Dissimilarity

    ERIC Educational Resources Information Center

    Pattamadilok, Chotiga; Lafontaine, Helene; Morais, Jose; Kolinsky, Regine

    2010-01-01

    The influence of orthographic knowledge has been consistently observed in speech recognition and metaphonological tasks. The present study provides data suggesting that such influence also pervades other cognitive domains related to language abilities, such as verbal working memory. Using serial recall of auditory seven-word lists, we observed…

  5. Effects of lexical characteristics and demographic factors on mandarin chinese open-set word recognition in children with cochlear implants.

    PubMed

    Liu, Haihong; Liu, Sha; Wang, Suju; Liu, Chang; Kong, Ying; Zhang, Ning; Li, Shujing; Yang, Yilin; Han, Demin; Zhang, Luo

    2013-01-01

    The purpose of this study was to examine the open-set word recognition performance of Mandarin Chinese-speaking children who had received a multichannel cochlear implant (CI) and examine the effects of lexical characteristics and demographic factors (i.e., age at implantation and duration of implant use) on Mandarin Chinese open-set word recognition in these children. Participants were 230 prelingually deafened children with CIs. Age at implantation ranged from 0.9 to 16.0 years, with a mean of 3.9 years. The Standard-Chinese version of the Monosyllabic Lexical Neighborhood test and the Multisyllabic Lexical Neighborhood test were used to evaluate the open-set word identification abilities of the children. A two-way analysis of variance was performed to delineate the lexical effects on open-set word identification, with word difficulty and syllable length as the two main factors. The effects of age at implantation and duration of implant use on open-set word-recognition performance were examined using correlational/regressional models. First, the average percent-correct scores for the disyllabic "easy" list, disyllabic "hard" list, monosyllabic "easy" list, and monosyllabic "hard" list were 65.0%, 51.3%, 58.9%, and 46.2%, respectively. For both the easy and hard lists, the percentage of words correctly identified was higher for disyllabic words than for monosyllabic words. Second, the CI group scored 26.3, 31.3, and 18.8 percentage points lower than their hearing-age-matched normal-hearing peers for 4, 5, and 6 years of hearing age, respectively. The corresponding gaps between the CI group and the chronological-age-matched normal-hearing group were 47.6, 49.6, and 42.4 percentage points, respectively. The individual variations in performance were much greater in the CI group than in the normal-hearing group. Third, the children exhibited steady improvements in performance as the duration of implant use increased, especially 1 to 6 years postimplantation. Last, age at implantation had significant effects on postimplantation word-recognition performance. The benefit of early implantation was particularly evident in children 5 years old or younger. First, Mandarin Chinese-speaking pediatric CI users' open-set word recognition was influenced by the lexical characteristics of the stimuli. The score was higher for easy words than for hard words and was higher for disyllabic words than for monosyllabic words. Second, Mandarin Chinese-speaking pediatric CI users exhibited steady progress in open-set word recognition as the duration of implant use increased. However, the present study also demonstrated that, even after 6 years of CI use, there was a significant deficit in open-set word-recognition performance in the CI children compared with their normal-hearing peers. Third, age at implantation had significant effects on open-set word-recognition performance. Early implanted children exhibited better performance than children implanted later.
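
    The two-way analysis described above can be sketched with simulated scores. The cell means below are taken from the abstract, but the per-child scores, cell sizes, and variability are invented, so the resulting ANOVA table is purely illustrative.

        # Two-way ANOVA sketch on simulated percent-correct scores; not the study's data.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        from statsmodels.formula.api import ols

        rng = np.random.default_rng(4)
        means = {("easy", "disyllabic"): 65, ("hard", "disyllabic"): 51,
                 ("easy", "monosyllabic"): 59, ("hard", "monosyllabic"): 46}
        rows = []
        for (difficulty, length), m in means.items():
            for _ in range(30):                       # 30 simulated children per cell
                rows.append({"difficulty": difficulty, "length": length,
                             "score": float(np.clip(rng.normal(m, 12), 0, 100))})
        df = pd.DataFrame(rows)

        model = ols("score ~ C(difficulty) * C(length)", data=df).fit()
        print(sm.stats.anova_lm(model, typ=2))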

  6. Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning

    PubMed Central

    Adlof, Suzanne M.; Patten, Hannah

    2017-01-01

    Purpose This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Method Fifty children, with a mean age of 8 years (range 5–12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Results Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. Conclusions These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed. PMID:28241284

  7. Individual Differences in Language Ability Are Related to Variation in Word Recognition, Not Speech Perception: Evidence from Eye Movements

    ERIC Educational Resources Information Center

    McMurray, Bob; Munson, Cheyenne; Tomblin, J. Bruce

    2014-01-01

    Purpose: The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Method: Adolescents with a range of language abilities (N = 74, including…

  8. Lexical Tone Variation and Spoken Word Recognition in Preschool Children: Effects of Perceptual Salience

    ERIC Educational Resources Information Center

    Singh, Leher; Tan, Aloysia; Wewalaarachchi, Thilanga D.

    2017-01-01

    Children undergo gradual progression in their ability to differentiate correct and incorrect pronunciations of words, a process that is crucial to establishing a native vocabulary. For the most part, the development of mature phonological representations has been researched by investigating children's sensitivity to consonant and vowel variation,…

  9. A Picture-Identification Test for Hearing-Impaired Children. Final Report.

    ERIC Educational Resources Information Center

    Ross, Mark; Lerman, Jay

    The Word Intelligibility by Picture Identification Test (WIPI) was developed to measure speech discrimination ability in hearing impaired children. In the first phase of development, the word stimuli were evaluated to determine whether they were within the recognition vocabulary of 15 hearing impaired children (aged 6 to 12) and whether the…

  10. Language deficits in poor comprehenders: a case for the simple view of reading.

    PubMed

    Catts, Hugh W; Adlof, Suzanne M; Ellis Weismer, Susan

    2006-04-01

    To examine concurrently and retrospectively the language abilities of children with specific reading comprehension deficits ("poor comprehenders") and compare them to typical readers and children with specific decoding deficits ("poor decoders"). In Study 1, the authors identified 57 poor comprehenders, 27 poor decoders, and 98 typical readers on the basis of 8th-grade reading achievement. These subgroups' performances on 8th-grade measures of language comprehension and phonological processing were investigated. In Study 2, the authors examined retrospectively subgroups' performances on measures of language comprehension and phonological processing in kindergarten, 2nd, and 4th grades. Word recognition and reading comprehension in 2nd and 4th grades were also considered. Study 1 showed that poor comprehenders had concurrent deficits in language comprehension but normal abilities in phonological processing. Poor decoders were characterized by the opposite pattern of language abilities. Study 2 results showed that subgroups had language (and word recognition) profiles in the earlier grades that were consistent with those observed in 8th grade. Subgroup differences in reading comprehension were inconsistent across grades but reflective of the changes in the components of reading comprehension over time. The results support the simple view of reading and the phonological deficit hypothesis. Furthermore, the findings indicate that a classification system that is based on the simple view has advantages over standard systems that focus only on word recognition and/or reading comprehension.

  11. Theory of mind and emotion recognition skills in children with specific language impairment, autism spectrum disorder and typical development: group differences and connection to knowledge of grammatical morphology, word-finding abilities and verbal working memory.

    PubMed

    Loukusa, Soile; Mäkinen, Leena; Kuusikko-Gauffin, Sanna; Ebeling, Hanna; Moilanen, Irma

    2014-01-01

    Social perception skills, such as understanding the mind and emotions of others, affect children's communication abilities in real-life situations. In addition to autism spectrum disorder (ASD), there is increasing knowledge that children with specific language impairment (SLI) also demonstrate difficulties in their social perception abilities. To compare the performance of children with SLI, ASD and typical development (TD) in social perception tasks measuring Theory of Mind (ToM) and emotion recognition. In addition, to evaluate the association between social perception tasks and language tests measuring word-finding abilities, knowledge of grammatical morphology and verbal working memory. Children with SLI (n = 18), ASD (n = 14) and TD (n = 25) completed two NEPSY-II subtests measuring social perception abilities: (1) Affect Recognition and (2) ToM (includes Verbal and non-verbal Contextual tasks). In addition, children's word-finding abilities were measured with the TWF-2, grammatical morphology by using the Grammatical Closure subtest of ITPA, and verbal working memory by using subtests of Sentence Repetition or Word List Interference (chosen according the child's age) of the NEPSY-II. Children with ASD scored significantly lower than children with SLI or TD on the NEPSY-II Affect Recognition subtest. Both SLI and ASD groups scored significantly lower than TD children on Verbal tasks of the ToM subtest of NEPSY-II. However, there were no significant group differences on non-verbal Contextual tasks of the ToM subtest of the NEPSY-II. Verbal tasks of the ToM subtest were correlated with the Grammatical Closure subtest and TWF-2 in children with SLI. In children with ASD correlation between TWF-2 and ToM: Verbal tasks was moderate, almost achieving statistical significance, but no other correlations were found. Both SLI and ASD groups showed difficulties in tasks measuring verbal ToM but differences were not found in tasks measuring non-verbal Contextual ToM. The association between Verbal ToM tasks and language tests was stronger in children with SLI than in children with ASD. There is a need for further studies in order to understand interaction between different areas of language and cognitive development. © 2014 Royal College of Speech and Language Therapists.

  12. Recognition Memory for Braille or Spoken Words: An fMRI study in Early Blind

    PubMed Central

    Burton, Harold; Sinclair, Robert J.; Agato, Alvin

    2012-01-01

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5 yrs. In an event-related design, we studied blood oxygen level-dependent responses to studied (“old”) compared to novel (“new”) words. Presentation mode was in Braille or spoken. Responses were larger for identified “new” words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken “new” words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with “old”/“new” recognition. Left dorsolateral prefrontal cortex had larger responses to “old” words only with Braille. Larger occipital cortex responses to “new” Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for “new” words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering “old” words. A larger response when identifying “new” words possibly resulted from exhaustive recollecting the sensory properties of “old” words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a “sensory echo” that aids recollection. PMID:22251836

  13. Recognition memory for Braille or spoken words: an fMRI study in early blind.

    PubMed

    Burton, Harold; Sinclair, Robert J; Agato, Alvin

    2012-02-15

    We examined cortical activity in early blind during word recognition memory. Nine participants were blind at birth and one by 1.5years. In an event-related design, we studied blood oxygen level-dependent responses to studied ("old") compared to novel ("new") words. Presentation mode was in Braille or spoken. Responses were larger for identified "new" words read with Braille in bilateral lower and higher tier visual areas and primary somatosensory cortex. Responses to spoken "new" words were larger in bilateral primary and accessory auditory cortex. Auditory cortex was unresponsive to Braille words and occipital cortex responded to spoken words but not differentially with "old"/"new" recognition. Left dorsolateral prefrontal cortex had larger responses to "old" words only with Braille. Larger occipital cortex responses to "new" Braille words suggested verbal memory based on the mechanism of recollection. A previous report in sighted noted larger responses for "new" words studied in association with pictures that created a distinctiveness heuristic source factor which enhanced recollection during remembering. Prior behavioral studies in early blind noted an exceptional ability to recall words. Utilization of this skill by participants in the current study possibly engendered recollection that augmented remembering "old" words. A larger response when identifying "new" words possibly resulted from exhaustive recollecting the sensory properties of "old" words in modality appropriate sensory cortices. The uniqueness of a memory role for occipital cortex is in its cross-modal responses to coding tactile properties of Braille. The latter possibly reflects a "sensory echo" that aids recollection. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Development of Body-Part Vocabulary in Toddlers in Relation to Self-Understanding

    ERIC Educational Resources Information Center

    Waugh, Whitney E.; Brownell, Celia A.

    2015-01-01

    To better understand young children's ability to communicate about their bodies, toddlers' comprehension and production of 27 common body-part words was assessed using parental report at 20 and 30 months (n = 64), and self-awareness was assessed using mirror self-recognition. Children at both ages comprehended more body-part words that referred to…

  15. Predictive Coding Accelerates Word Recognition and Learning in the Early Stages of Language Development

    ERIC Educational Resources Information Center

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-01-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month olds' electrophysiological brain responses to heard…

  16. Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging.

    PubMed

    Moberly, Aaron C; Harris, Michael S; Boyce, Lauren; Nittrouer, Susan

    2017-04-14

    Models of speech recognition suggest that "top-down" linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users.

  17. Speech Recognition in Adults With Cochlear Implants: The Effects of Working Memory, Phonological Sensitivity, and Aging

    PubMed Central

    Harris, Michael S.; Boyce, Lauren; Nittrouer, Susan

    2017-01-01

    Purpose Models of speech recognition suggest that “top-down” linguistic and cognitive functions, such as use of phonotactic constraints and working memory, facilitate recognition under conditions of degradation, such as in noise. The question addressed in this study was what happens to these functions when a listener who has experienced years of hearing loss obtains a cochlear implant. Method Thirty adults with cochlear implants and 30 age-matched controls with age-normal hearing underwent testing of verbal working memory using digit span and serial recall of words. Phonological capacities were assessed using a lexical decision task and nonword repetition. Recognition of words in sentences in speech-shaped noise was measured. Results Implant users had only slightly poorer working memory accuracy than did controls and only on serial recall of words; however, phonological sensitivity was highly impaired. Working memory did not facilitate speech recognition in noise for either group. Phonological sensitivity predicted sentence recognition for implant users but not for listeners with normal hearing. Conclusion Clinical speech recognition outcomes for adult implant users relate to the ability of these users to process phonological information. Results suggest that phonological capacities may serve as potential clinical targets through rehabilitative training. Such novel interventions may be particularly helpful for older adult implant users. PMID:28384805

  18. Image jitter enhances visual performance when spatial resolution is impaired.

    PubMed

    Watson, Lynne M; Strang, Niall C; Scobie, Fraser; Love, Gordon D; Seidel, Dirk; Manahilov, Velitchko

    2012-09-06

    Visibility of low-spatial frequency stimuli improves when their contrast is modulated at 5 to 10 Hz compared with stationary stimuli. Therefore, temporal modulations of visual objects could enhance the performance of low vision patients who primarily perceive images of low-spatial frequency content. We investigated the effect of retinal-image jitter on word recognition speed and facial emotion recognition in subjects with central visual impairment. Word recognition speed and accuracy of facial emotion discrimination were measured in volunteers with AMD under stationary and jittering conditions. Computer-driven and optoelectronic approaches were used to induce retinal-image jitter with duration of 100 or 166 ms and amplitude within the range of 0.5 to 2.6° visual angle. Word recognition speed was also measured for participants with simulated (Bangerter filters) visual impairment. Text jittering markedly enhanced word recognition speed for people with severe visual loss (101 ± 25%), while for those with moderate visual impairment, this effect was weaker (19 ± 9%). The ability of low vision patients to discriminate the facial emotions of jittering images improved by a factor of 2. A prototype of optoelectronic jitter goggles produced similar improvement in facial emotion discrimination. Word recognition speed in participants with simulated visual impairment was enhanced for interjitter intervals over 100 ms and reduced for shorter intervals. Results suggest that retinal-image jitter with optimal frequency and amplitude is an effective strategy for enhancing visual information processing in the absence of spatial detail. These findings will enable the development of novel tools to improve the quality of life of low vision patients.

  19. The Importance of Flexibility of Pronunciation in Learning to Decode: A Training Study in Set for Variability

    ERIC Educational Resources Information Center

    Zipke, Marcy

    2016-01-01

    The ability to flexibly approach the pronunciation of unknown words, or set "for variability", has been shown to contribute to word recognition skills. However, this is the first study that has attempted to teach students strategies for increasing their set for variability. Beginning readers (N = 15) were instructed to correct oral…

  20. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    PubMed Central

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions High-variability sentences in multitalker babble were particularly challenging for non-native listeners. 
Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842
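Keyword recognition scoring of the kind used for the PRESTO and HINT sentences above reduces to a proportion-correct computation. A minimal sketch follows; the sentence, keywords, and exact-match scoring rule are illustrative assumptions, not the tests' official scoring procedures.

```python
# Toy keyword scoring for sentence recognition in noise.
def keyword_score(response, keywords):
    """Proportion of target keywords reported in the listener's response."""
    said = set(response.lower().split())
    # Real scoring typically also normalizes punctuation and allowable morphology.
    return sum(k.lower() in said for k in keywords) / len(keywords)

keywords = ["dog", "chased", "ball", "park"]
response = "the dog chased a ball in the yard"
print(f"keywords correct: {keyword_score(response, keywords):.2f}")  # 0.75
```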

  1. Reading Ability and the Utilization of Orthographic Structure in Reading. Technical Report No. 515.

    ERIC Educational Resources Information Center

    Massaro, Dominic W.; Taylor, Glen A.

    Previous research has demonstrated that readers utilize orthographic structure in their perceptual recognition of letter strings. Two experiments were conducted to assess whether this utilization varied with reading ability. Anagrams of words were made to create strings that orthogonally combined high and low single letter positional frequency and…

  2. Development of Body Part Vocabulary in Toddlers in Relation to Self-Understanding

    PubMed Central

    Brownell, Celia

    2014-01-01

    To better understand young children’s ability to communicate about their bodies, toddlers’ comprehension and production of 27 common body part words was assessed using parental report at 20 and 30 months (n = 64), and self-awareness was assessed using mirror self-recognition. Children at both ages comprehended more body part words that referred to themselves than to others’ bodies, and more words referring to locations that they could see on themselves than to those they could not see. Children with more advanced mirror self-recognition comprehended and produced more body part words. These findings suggest that with age and better understanding of the self, children also possess a better understanding of the body, and they provide new information about factors that affect how young children begin to talk about their own and others’ bodies. They should be useful for practitioners who need to ask children about their bodies and body parts. PMID:26195850

  3. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    PubMed

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (inability to suppress competing words), rather than differences in phonological categorization or auditory abilities.

  4. Speech Recognition and Parent Ratings From Auditory Development Questionnaires in Children Who Are Hard of Hearing.

    PubMed

    McCreery, Ryan W; Walker, Elizabeth A; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HAs) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children's auditory experience on parent-reported auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Parent ratings on auditory development questionnaires and children's speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children rating scale, and an adaptation of the Speech, Spatial, and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open- and Closed-Set Test, Early Speech Perception test, Lexical Neighborhood Test, and Phonetically Balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared with peers with normal hearing matched for age, maternal educational level, and nonverbal intelligence. The effects of aided audibility, HA use, and language ability on parent responses to auditory development questionnaires and on children's speech recognition were also examined. Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use, and better language abilities generally had higher parent ratings of auditory skills and better speech-recognition abilities in quiet and in noise than peers with less audibility, more limited HA use, or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Children who are hard of hearing continue to experience delays in auditory skill development and speech-recognition abilities compared with peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported before the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech-recognition abilities and also may enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children's speech recognition.

  5. Spoken Word Recognition in Toddlers Who Use Cochlear Implants

    PubMed Central

    Grieco-Calub, Tina M.; Saffran, Jenny R.; Litovsky, Ruth Y.

    2010-01-01

    Purpose The purpose of this study was to assess the time course of spoken word recognition in 2-year-old children who use cochlear implants (CIs) in quiet and in the presence of speech competitors. Method Children who use CIs and age-matched peers with normal acoustic hearing listened to familiar auditory labels, in quiet or in the presence of speech competitors, while their eye movements to target objects were digitally recorded. Word recognition performance was quantified by measuring each child’s reaction time (i.e., the latency between the spoken auditory label and the first look at the target object) and accuracy (i.e., the amount of time that children looked at target objects within 367 ms to 2,000 ms after the label onset). Results Children with CIs were less accurate and took longer to fixate target objects than did age-matched children without hearing loss. Both groups of children showed reduced performance in the presence of the speech competitors, although many children continued to recognize labels at above-chance levels. Conclusion The results suggest that the unique auditory experience of young CI users slows the time course of spoken word recognition abilities. In addition, real-world listening environments may slow language processing in young language learners, regardless of their hearing status. PMID:19951921
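The two eye-tracking measures defined in the Method above (reaction time as the latency to the first target look, and accuracy as looking at the target within 367–2,000 ms after label onset) can be sketched as follows. The gaze-sample format and the 60 Hz sampling rate are assumptions for illustration, not the authors' pipeline.

```python
# Sketch: compute looking-based RT and accuracy from boolean gaze samples.
import numpy as np

def looking_measures(gaze_on_target, sample_ms, label_onset_ms,
                     win_start_ms=367, win_end_ms=2000):
    """gaze_on_target: one boolean per sample (True = gaze on target object)."""
    t = np.arange(len(gaze_on_target)) * sample_ms          # sample timestamps (ms)
    after_onset = t >= label_onset_ms
    # Reaction time: latency from label onset to the first look at the target
    hits = np.flatnonzero(gaze_on_target & after_onset)
    rt_ms = t[hits[0]] - label_onset_ms if hits.size else None
    # Accuracy: proportion of the 367-2000 ms window spent looking at the target
    window = (t >= label_onset_ms + win_start_ms) & (t <= label_onset_ms + win_end_ms)
    accuracy = gaze_on_target[window].mean() if window.any() else None
    return rt_ms, accuracy

# Example: ~60 Hz tracker (16.7 ms/sample), label onset at 1000 ms
gaze = np.zeros(300, dtype=bool)
gaze[70:180] = True                                          # a sustained target look
print(looking_measures(gaze, 16.7, 1000.0))
```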

  6. A Spondee Recognition Test for Young Hearing-Impaired Children

    ERIC Educational Resources Information Center

    Cramer, Kathryn D.; Erber, Norman P.

    1974-01-01

    An auditory test of 10 spondaic words recorded on Language Master cards was presented monaurally, through insert receivers to 58 hearing-impaired young children to evaluate their ability to recognize familiar speech material. (MYS)

  7. Word learning in adults with second-language experience: effects of phonological and referent familiarity.

    PubMed

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2013-04-01

    The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar versus unfamiliar referents and whether successful word learning is associated with increased second-language experience. Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically familiar novel words (constructed using English sounds) or phonologically unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition task. A median-split procedure identified high-ability and low-ability word learners in each condition, and the two groups were compared on measures of second-language experience. Findings suggest that the ability to accurately match newly learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: in which phonologically unfamiliar novel words were paired with familiar referents. Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults.

  8. Word learning in adults with second language experience: Effects of phonological and referent familiarity

    PubMed Central

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2014-01-01

    Purpose The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience. Method Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition-task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience. Results Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: Where phonologically-unfamiliar novel words were paired with familiar referents. Conclusions Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults. PMID:22992709
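The median-split procedure mentioned in the Method above is straightforward; a toy sketch is shown below, with invented recognition-accuracy scores standing in for the learners' data.

```python
# Toy median split into high-ability and low-ability word learners.
import numpy as np

def median_split(scores):
    """Return boolean masks for high-ability and low-ability learners."""
    scores = np.asarray(scores, dtype=float)
    cut = np.median(scores)
    high = scores > cut
    return high, ~high

recognition_accuracy = np.array([0.55, 0.80, 0.62, 0.91, 0.47, 0.73, 0.68, 0.88])
high, low = median_split(recognition_accuracy)
print("high-ability learners:", recognition_accuracy[high])
print("low-ability learners: ", recognition_accuracy[low])
# The two groups would then be compared on measures of second-language experience.
```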

  9. Physical Feature Encoding and Word Recognition Abilities Are Altered in Children with Intractable Epilepsy: Preliminary Neuromagnetic Evidence

    PubMed Central

    Pardos, Maria; Korostenskaja, Milena; Xiang, Jing; Fujiwara, Hisako; Lee, Ki H.; Horn, Paul S.; Byars, Anna; Vannest, Jennifer; Wang, Yingying; Hemasilpin, Nat; Rose, Douglas F.

    2015-01-01

    Objective evaluation of language function is critical for children with intractable epilepsy under consideration for epilepsy surgery. The purpose of this preliminary study was to evaluate word recognition in children with intractable epilepsy by using magnetoencephalography (MEG). Ten children with intractable epilepsy (M/F 6/4, mean ± SD 13.4 ± 2.2 years) were matched on age and sex to healthy controls. Common nouns were presented simultaneously from visual and auditory sensory inputs in “match” and “mismatch” conditions. Neuromagnetic responses M1, M2, M3, M4, and M5 with latencies of ~100 ms, ~150 ms, ~250 ms, ~350 ms, and ~450 ms, respectively, elicited during the “match” condition were identified. Compared to healthy children, epilepsy patients had both significantly delayed latency of the M1 and reduced amplitudes of M3 and M5 responses. These results provide neurophysiologic evidence of altered word recognition in children with intractable epilepsy. PMID:26146459

  10. What Could Replace the Phonics Screening Check during the Early Years of Reading Development?

    ERIC Educational Resources Information Center

    Glazzard, Jonathan

    2017-01-01

    This article argues that the phonics screening check, introduced in England in 2012, is not fit for purpose. It is a test of children's ability to decode words rather than an assessment of their reading skills. Whilst this assessment may, to some extent, support the needs of children who rely on phonemic decoding as a route to word recognition, it…

  11. Comparison of auditory temporal resolution between monolingual Persian and bilingual Turkish-Persian individuals.

    PubMed

    Omidvar, Shaghayegh; Jafari, Zahra; Tahaei, Ali Akbar; Salehi, Masoud

    2013-04-01

    The aims of this study were to prepare a Persian version of the temporal resolution test using the method of Phillips et al (1994) and Stuart and Phillips (1996), and to compare the word-recognition performance in the presence of continuous and interrupted noise as well as the temporal resolution abilities between monolingual (ML) Persian and bilingual (BL) Turkish-Persian young adults. Word-recognition scores (WRSs) were obtained in quiet and in the presence of background competing continuous and interrupted noise at signal-to-noise ratios (SNRs) of -20, -10, 0, and 10 dB. Two groups of 33 ML Persian and 36 BL Turkish-Persian volunteers participated. WRSs significantly differed between ML and BL subjects at four sensation levels in the presence of continuous and interrupted noise. However, the difference in the release from masking between ML and BL subjects was not significant at the studied SNRs. BL Turkish-Persian listeners seem to show poorer performance when responding to Persian words in continuous and interrupted noise. However, bilingualism may not affect auditory temporal resolution ability.
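The release-from-masking comparison referred to above is the difference between word-recognition scores in interrupted and continuous noise at each signal-to-noise ratio. A hedged sketch with invented scores, purely to show the arithmetic:

```python
# Illustrative release-from-masking computation (all scores are made up).
snrs = [-20, -10, 0, 10]                               # dB signal-to-noise ratios
wrs_continuous  = {-20: 12, -10: 35, 0: 64, 10: 86}    # % words correct, continuous noise
wrs_interrupted = {-20: 30, -10: 58, 0: 80, 10: 92}    # % words correct, interrupted noise

for snr in snrs:
    release = wrs_interrupted[snr] - wrs_continuous[snr]
    print(f"SNR {snr:>3} dB: release from masking = {release} percentage points")
```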

  12. Memory without context: amnesia with confabulations after infarction of the right capsular genu.

    PubMed Central

    Schnider, A; Gutbrod, K; Hess, C W; Schroth, G

    1996-01-01

    OBJECTIVE--To explore the mechanism of an amnesia marked by confabulations and lack of insight in a patient with an infarct of the right inferior capsular genu. The confabulations could mostly be traced back to earlier events, indicating that the memory disorder ensued from an inability to store the temporal and spatial context of information acquisition rather than a failure to store new information. METHODS--To test the patient's ability to store the context of information acquisition, two experiments were composed in which she was asked to decide when or where she had learned the words from two word lists presented at different points in time or in different rooms. To test her ability to store new information, two continuous recognition tests with novel non-words and nonsense designs were used. Recognition of these stimuli was assumed to be independent of the context of acquisition because the patient could not have an a priori sense of familiarity with them. RESULTS--The patient performed at chance in the experiments probing knowledge of the context of information acquisition, although she recognised the presented words almost as well as the controls. By contrast, her performance was normal in the recognition tests with non-words and nonsense designs. CONCLUSION--These findings indicate that the patient's amnesia was based on an inability to store the context of information acquisition rather than the information itself. Based on an analysis of her lesion, which disconnected the thalamus from the orbitofrontal cortex and the amygdala, and considering the similarities between her disorder, Wernicke-Korsakoff syndrome, and the amnesia after orbitofrontal lesions, it is proposed that contextual amnesia results from interruption of the loop connecting the amygdala, the dorsomedial nucleus, and the orbitofrontal cortex. PMID:8708688

  13. Reading Comprehension in Autism Spectrum Disorders: The Role of Oral Language and Social Functioning

    ERIC Educational Resources Information Center

    Ricketts, Jessie; Jones, Catherine R. G.; Happe, Francesca; Charman, Tony

    2013-01-01

    Reading comprehension is an area of difficulty for many individuals with autism spectrum disorders (ASD). According to the Simple View of Reading, word recognition and oral language are both important determinants of reading comprehension ability. We provide a novel test of this model in 100 adolescents with ASD of varying intellectual ability.…

  14. Brain regions and functional interactions supporting early word recognition in the face of input variability.

    PubMed

    Benavides-Varela, Silvia; Siugzdaite, Roma; Gómez, David Maximiliano; Macagno, Francesco; Cattarossi, Luigi; Mehler, Jacques

    2017-07-18

    Perception and cognition in infants have been traditionally investigated using habituation paradigms, assuming that babies' memories in laboratory contexts are best constructed after numerous repetitions of the very same stimulus in the absence of interference. A crucial, yet open, question regards how babies deal with stimuli experienced in a fashion similar to everyday learning situations-namely, in the presence of interfering stimuli. To address this question, we used functional near-infrared spectroscopy to test 40 healthy newborns on their ability to encode words presented in concomitance with other words. The results evidenced a habituation-like hemodynamic response during encoding in the left-frontal region, which was associated with a progressive decrement of the functional connections between this region and the left-temporal, right-temporal, and right-parietal regions. In a recognition test phase, a characteristic neural signature of recognition recruited first the right-frontal region and subsequently the right-parietal ones. Connections originating from the right-temporal regions to these areas emerged when newborns listened to the familiar word in the test phase. These findings suggest a neural specialization at birth characterized by the lateralization of memory functions: the interplay between temporal and left-frontal regions during encoding and between temporo-parietal and right-frontal regions during recognition of speech sounds. Most critically, the results show that newborns are capable of retaining the sound of specific words despite hearing other stimuli during encoding. Thus, habituation designs that include various items may be as effective for studying early memory as repeated presentation of a single word.

  15. Developmental reversals in false memory: now you see them, now you don't!

    PubMed

    Holliday, Robyn E; Brainerd, Charles J; Reyna, Valerie F

    2011-03-01

    A developmental reversal in false memory is the counterintuitive phenomenon of higher levels of false memory in older children, adolescents, and adults than in younger children. The ability of verbatim memory to suppress this age trend in false memory was evaluated using the Deese-Roediger-McDermott (DRM) paradigm. Seven and 11-year-old children studied DRM lists either in a standard condition (whole words) that normally produces high levels of false memory or in an alternative condition that should enhance verbatim memory (word fragments). Half the children took 1 recognition test, and the other half took 3 recognition tests. In the single-test condition, the typical age difference in false memory was found for the word condition (higher false memory for 11-year-olds than for 7-year-olds), but in the word fragment condition false memory was lower in the older children. In the word condition, false memory increased over successive recognition tests. Our findings are consistent with 2 principles of fuzzy-trace theory's explanation of false memories: (a) reliance on verbatim rather than gist memory causes such errors to decline with age, and (b) repeated testing increases reliance on gist memory in older children and adults who spontaneously connect meaning across events. PsycINFO Database Record (c) 2011 APA, all rights reserved.
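DRM false memory of the kind studied above is typically scored by contrasting hits to studied list words with false alarms to the unstudied critical lure. The sketch below uses the classic "sleep" list purely as an illustration; it is not the authors' materials or scoring code.

```python
# Toy DRM scoring: hits, critical-lure false alarm, and unrelated false alarms.
studied = {"bed", "rest", "awake", "tired", "dream", "snooze", "blanket"}
critical_lure = "sleep"                       # semantically related but never presented
unrelated_new = {"chair", "pencil", "river"}

responses_old = {"bed", "dream", "sleep", "tired", "chair"}   # items endorsed as "old"

hit_rate = len(responses_old & studied) / len(studied)
false_memory = critical_lure in responses_old
unrelated_fa = len(responses_old & unrelated_new) / len(unrelated_new)

print(f"hit rate = {hit_rate:.2f}, critical-lure false alarm = {false_memory}, "
      f"unrelated false-alarm rate = {unrelated_fa:.2f}")
```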

  16. Rejecting familiar distracters during recognition in young adults with traumatic brain injury and in healthy older adults.

    PubMed

    Ozen, Lana J; Skinner, Erin I; Fernandes, Myra A

    2010-05-01

    The most common cognitive complaint reported by healthy older adults and young adults with traumatic brain injury (TBI) is memory difficulties. We investigated the effects of normal aging and the long-term effects of TBI in young adults on the susceptibility to incorrectly endorse distracter information on a memory test. Prior to a study phase, participants viewed a "pre-exposure" list containing distracter words, presented once or three times, and half of the target study words. Subsequently, during the study phase, all target words were presented such that, across lists, study words were viewed either once or three times. On the recognition test, TBI and older adult participants were more likely to falsely endorse "pre-exposed" distracter words viewed three times as being from the target study list, compared to non-head-injured young controls. Normal aging and head injury in young may similarly compromise one's ability to reject highly familiar, but distracting, information during recognition. Older adult and TBI participants were also slower to complete the Trail Making task and had poorer output on a Digit Span task, suggesting these two populations share a deficit in executive function and working memory. Similar changes in frontal lobe function may underlie these shared cognitive deficits.

  17. A predictive study of reading comprehension in third-grade Spanish students.

    PubMed

    López-Escribano, Carmen; Elosúa de Juan, María Rosa; Gómez-Veiga, Isabel; García-Madruga, Juan Antonio

    2013-01-01

    The study of the contribution of language and cognitive skills to reading comprehension is an important goal of current reading research. However, reading comprehension is not easily assessed by a single instrument, as different comprehension tests vary in the type of tasks used and in the cognitive demands required. This study examines the contribution of basic language and cognitive skills (decoding, word recognition, reading speed, verbal and nonverbal intelligence and working memory) to reading comprehension, assessed by two tests utilizing various tasks that require different skill sets in third-grade Spanish-speaking students. Linguistic and cognitive abilities predicted reading comprehension. A measure of reading speed (the reading time of pseudo-words) was the best predictor of reading comprehension when assessed by the PROLEC-R test. However, measures of word recognition (the orthographic choice task) and verbal working memory were the best predictors of reading comprehension when assessed by means of the DARC test. These results show, on the one hand, that reading speed and word recognition are better predictors of Spanish language comprehension than reading accuracy. On the other, the reading comprehension test applied here serves as a critical variable when analyzing and interpreting results regarding this topic.

  18. Parallel language activation and inhibitory control in bimodal bilinguals.

    PubMed

    Giezen, Marcel R; Blumenfeld, Henrike K; Shook, Anthony; Marian, Viorica; Emmorey, Karen

    2015-08-01

    Findings from recent studies suggest that spoken-language bilinguals engage nonlinguistic inhibitory control mechanisms to resolve cross-linguistic competition during auditory word recognition. Bilingual advantages in inhibitory control might stem from the need to resolve perceptual competition between similar-sounding words both within and between their two languages. If so, these advantages should be lessened or eliminated when there is no perceptual competition between two languages. The present study investigated the extent of inhibitory control recruitment during bilingual language comprehension by examining associations between language co-activation and nonlinguistic inhibitory control abilities in bimodal bilinguals, whose two languages do not perceptually compete. Cross-linguistic distractor activation was identified in the visual world paradigm, and correlated significantly with performance on a nonlinguistic spatial Stroop task within a group of 27 hearing ASL-English bilinguals. Smaller Stroop effects (indexing more efficient inhibition) were associated with reduced co-activation of ASL signs during the early stages of auditory word recognition. These results suggest that inhibitory control in auditory word recognition is not limited to resolving perceptual linguistic competition in phonological input, but is also used to moderate competition that originates at the lexico-semantic level. Copyright © 2015 Elsevier B.V. All rights reserved.
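The key analysis described above relates a nonlinguistic spatial Stroop effect (incongruent minus congruent reaction time) to a cross-linguistic co-activation index. A minimal sketch with synthetic numbers follows; it is not the authors' data or code.

```python
# Sketch: correlate an individual Stroop effect with a co-activation index.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n = 27                                             # hearing ASL-English bilinguals
stroop_effect = rng.normal(60, 20, n)              # ms; smaller = more efficient inhibition
# Assume co-activation scales with the Stroop effect plus noise (illustration only)
coactivation = 0.002 * stroop_effect + rng.normal(0, 0.03, n)

r, p = pearsonr(stroop_effect, coactivation)
print(f"r = {r:.2f}, p = {p:.3f}  "
      "(positive r: larger Stroop effects go with more ASL co-activation)")
```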

  19. Levodopa enhances explicit new-word learning in healthy adults: a preliminary study.

    PubMed

    Shellshear, Leanne; MacDonald, Anna D; Mahoney, Jeffrey; Finch, Emma; McMahon, Katie; Silburn, Peter; Nathan, Pradeep J; Copland, David A

    2015-09-01

    While the role of dopamine in modulating executive function, working memory and associative learning has been established; its role in word learning and language processing more generally is not clear. This preliminary study investigated the impact of increased synaptic dopamine levels on new-word learning ability in healthy young adults using an explicit learning paradigm. A double-blind, placebo-controlled, between-groups design was used. Participants completed five learning sessions over 1 week with levodopa or placebo administered at each session (five doses, 100 mg). Each session involved a study phase followed by a test phase. Test phases involved recall and recognition tests of the new (non-word) names previously paired with unfamiliar objects (half with semantic descriptions) during the study phase. The levodopa group showed superior recall accuracy for new words over five learning sessions compared with the placebo group and better recognition accuracy at a 1-month follow-up for words learnt with a semantic description. These findings suggest that dopamine boosts initial lexical acquisition and enhances longer-term consolidation of words learnt with semantic information, consistent with dopaminergic enhancement of semantic salience. Copyright © 2015 John Wiley & Sons, Ltd.

  20. Reading as Active Sensing: A Computational Model of Gaze Planning in Word Recognition

    PubMed Central

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting. PMID:20577589

  1. Reading as active sensing: a computational model of gaze planning in word recognition.

    PubMed

    Ferro, Marcello; Ognibene, Dimitri; Pezzulo, Giovanni; Pirrelli, Vito

    2010-01-01

    We offer a computational model of gaze planning during reading that consists of two main components: a lexical representation network, acquiring lexical representations from input texts (a subset of the Italian CHILDES database), and a gaze planner, designed to recognize written words by mapping strings of characters onto lexical representations. The model implements an active sensing strategy that selects which characters of the input string are to be fixated, depending on the predictions dynamically made by the lexical representation network. We analyze the developmental trajectory of the system in performing the word recognition task as a function of both increasing lexical competence, and correspondingly increasing lexical prediction ability. We conclude by discussing how our approach can be scaled up in the context of an active sensing strategy applied to a robotic setting.
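In the spirit of the active-sensing strategy described above, a toy gaze planner can pick the next character position to fixate where the surviving lexical candidates disagree most. The tiny word list and the entropy-based selection rule are illustrative assumptions, not the authors' lexical representation network or planner.

```python
# Toy active-sensing gaze planner: fixate the most informative character position.
import math
from collections import Counter

LEXICON = ["cane", "casa", "cosa", "corsa", "carta"]    # toy Italian word list

def candidates(observed):
    """Words consistent with the characters fixated so far ({position: char})."""
    return [w for w in LEXICON
            if all(pos < len(w) and w[pos] == ch for pos, ch in observed.items())]

def next_fixation(observed, max_len):
    """Pick the unobserved position with maximal character entropy."""
    cands = candidates(observed)
    best_pos, best_h = None, -1.0
    for pos in range(max_len):
        if pos in observed:
            continue
        counts = Counter(w[pos] for w in cands if pos < len(w))
        total = sum(counts.values())
        h = -sum(c / total * math.log2(c / total) for c in counts.values()) if total else 0.0
        if h > best_h:
            best_pos, best_h = pos, h
    return best_pos, cands

observed = {0: "c"}                      # first fixation landed on the initial letter
pos, cands = next_fixation(observed, max_len=5)
print("candidates:", cands, "-> next fixation at position", pos)
```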

  2. Recognition and Comprehension of "Narrow Focus" by Young Adults With Prelingual Hearing Loss Using Hearing Aids or Cochlear Implants.

    PubMed

    Segal, Osnat; Kishon-Rabin, Liat

    2017-12-20

    The stressed word in a sentence (narrow focus [NF]) conveys information about the intent of the speaker and is therefore important for processing spoken language and in social interactions. The ability of participants with severe-to-profound prelingual hearing loss to comprehend NF has rarely been investigated. The purpose of this study was to assess the recognition and comprehension of NF by young adults with prelingual hearing loss compared with those of participants with normal hearing (NH). The participants included young adults with hearing aids (HA; n = 10), cochlear implants (CI; n = 12), and NH (n = 18). The test material included the Hebrew Narrow Focus Test (Segal, Kaplan, Patael, & Kishon-Rabin, in press), with 3 subtests, which was used to assess the recognition and comprehension of NF in different contexts. The following results were obtained: (a) CI and HA users successfully recognized the stressed word, with the worst performance for CI; (b) HA and CI comprehended NF less well than NH; and (c) the comprehension of NF was associated with verbal working memory and expressive vocabulary in CI users. Most CI and HA users were able to recognize the stressed word in a sentence but had considerable difficulty understanding it. Different factors may contribute to this difficulty, including the memory load during the task itself and linguistic and pragmatic abilities. https://doi.org/10.23641/asha.5572792.

  3. Speech recognition and parent-ratings from auditory development questionnaires in children who are hard of hearing

    PubMed Central

    McCreery, Ryan W.; Walker, Elizabeth A.; Spratford, Meredith; Oleson, Jacob; Bentler, Ruth; Holte, Lenore; Roush, Patricia

    2015-01-01

    Objectives Progress has been made in recent years in the provision of amplification and early intervention for children who are hard of hearing. However, children who use hearing aids (HA) may have inconsistent access to their auditory environment due to limitations in speech audibility through their HAs or limited HA use. The effects of variability in children’s auditory experience on parent-report auditory skills questionnaires and on speech recognition in quiet and in noise were examined for a large group of children who were followed as part of the Outcomes of Children with Hearing Loss study. Design Parent ratings on auditory development questionnaires and children’s speech recognition were assessed for 306 children who are hard of hearing. Children ranged in age from 12 months to 9 years of age. Three questionnaires involving parent ratings of auditory skill development and behavior were used, including the LittlEARS Auditory Questionnaire, Parents Evaluation of Oral/Aural Performance in Children Rating Scale, and an adaptation of the Speech, Spatial and Qualities of Hearing scale. Speech recognition in quiet was assessed using the Open and Closed set task, Early Speech Perception Test, Lexical Neighborhood Test, and Phonetically-balanced Kindergarten word lists. Speech recognition in noise was assessed using the Computer-Assisted Speech Perception Assessment. Children who are hard of hearing were compared to peers with normal hearing matched for age, maternal educational level and nonverbal intelligence. The effects of aided audibility, HA use and language ability on parent responses to auditory development questionnaires and on children’s speech recognition were also examined. Results Children who are hard of hearing had poorer performance than peers with normal hearing on parent ratings of auditory skills and had poorer speech recognition. Significant individual variability among children who are hard of hearing was observed. Children with greater aided audibility through their HAs, more hours of HA use and better language abilities generally had higher parent ratings of auditory skills and better speech recognition abilities in quiet and in noise than peers with less audibility, more limited HA use or poorer language abilities. In addition to the auditory and language factors that were predictive for speech recognition in quiet, phonological working memory was also a positive predictor for word recognition abilities in noise. Conclusions Children who are hard of hearing continue to experience delays in auditory skill development and speech recognition abilities compared to peers with normal hearing. However, significant improvements in these domains have occurred in comparison to similar data reported prior to the adoption of universal newborn hearing screening and early intervention programs for children who are hard of hearing. Increasing the audibility of speech has a direct positive effect on auditory skill development and speech recognition abilities, and may also enhance these skills by improving language abilities in children who are hard of hearing. Greater number of hours of HA use also had a significant positive impact on parent ratings of auditory skills and children’s speech recognition. PMID:26731160

  4. The comprehension of ambiguous idioms in aphasic patients.

    PubMed

    Cacciari, Cristina; Reati, Fabiola; Colombo, Maria Rosa; Padovani, Roberto; Rizzo, Silvia; Papagno, Costanza

    2006-01-01

    The ability to understand ambiguous idioms was assessed in 15 aphasic patients with preserved comprehension at a single word level. A string-to-word matching task was used. Patients were requested to choose one among four alternatives: a word associated with the figurative meaning of the idiom string; a word semantically associated with the last constituent of the idiom string; and two unrelated words. The results showed that patients' performance was impaired with respect to a group of matched controls, with patients showing a frontal and/or temporal lesion being the most impaired. A significant number of semantic-associate errors were produced, suggesting an impairment of inhibition mechanisms and/or of recognition/activation of the idiomatic meaning.

  5. Lexical influences on competing speech perception in younger, middle-aged, and older adults

    PubMed Central

    Helfer, Karen S.; Jesse, Alexandra

    2015-01-01

    The influence of lexical characteristics of words in to-be-attended and to-be-ignored speech streams was examined in a competing speech task. Older, middle-aged, and younger adults heard pairs of low-cloze probability sentences in which the frequency or neighborhood density of words was manipulated in either the target speech stream or the masking speech stream. All participants also completed a battery of cognitive measures. As expected, for all groups, target words that occur frequently or that are from sparse lexical neighborhoods were easier to recognize than words that are infrequent or from dense neighborhoods. Compared to other groups, these neighborhood density effects were largest for older adults; the frequency effect was largest for middle-aged adults. Lexical characteristics of words in the to-be-ignored speech stream also affected recognition of to-be-attended words, but only when overall performance was relatively good (that is, when younger participants listened to the speech streams at a more advantageous signal-to-noise ratio). For these listeners, to-be-ignored masker words from sparse neighborhoods interfered with recognition of target speech more than masker words from dense neighborhoods. Amount of hearing loss and cognitive abilities relating to attentional control modulated overall performance as well as the strength of lexical influences. PMID:26233036
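Lexical neighborhood density, the manipulation discussed above, is conventionally the count of words reachable by a single phoneme substitution, addition, or deletion. The sketch below uses spellings as a stand-in for phonemic transcriptions and a toy lexicon, so the counts are illustrative only.

```python
# Toy neighborhood-density count (one-edit neighbors over a mini-lexicon).
def one_away(a, b):
    """True if b differs from a by one substitution, insertion, or deletion."""
    if abs(len(a) - len(b)) > 1:
        return False
    if len(a) == len(b):
        return sum(x != y for x, y in zip(a, b)) == 1
    short, long_ = (a, b) if len(a) < len(b) else (b, a)
    return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

LEXICON = ["cat", "cap", "cot", "coat", "at", "cast", "dog"]

def neighborhood_density(word, lexicon):
    return sum(one_away(word, w) for w in lexicon if w != word)

for w in ["cat", "dog"]:
    print(w, "has", neighborhood_density(w, LEXICON), "neighbors")
```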

  6. L2 Word Recognition: Influence of L1 Orthography on Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Hamada, Megumi

    2017-01-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on…

  7. Research review: reading comprehension in developmental disorders of language and communication.

    PubMed

    Ricketts, Jessie

    2011-11-01

    Deficits in reading comprehension have been reported in specific language impairment (SLI), Down syndrome (DS) and autism spectrum disorders (ASD). In this review (based on a search of the ISI Web of Knowledge database to 2011), the Simple View of Reading is used as a framework for considering reading comprehension in these groups. There is substantial evidence for reading comprehension impairments in SLI and growing evidence that weaknesses in this domain are common in DS and ASD. Further, in these groups reading comprehension is typically more impaired than word recognition. However, there is also evidence that some children and adolescents with DS, ASD and a history of SLI develop reading comprehension and word recognition skills at or above the age appropriate level. This review of the literature indicates that factors including word recognition, oral language, nonverbal ability and working memory may explain reading comprehension difficulties in SLI, DS and ASD. In addition, it highlights methodological issues, implications of poor reading comprehension and fruitful areas for future research. © 2011 The Author. Journal of Child Psychology and Psychiatry © 2011 Association for Child and Adolescent Mental Health.

  8. Prosody and Spoken Word Recognition in Early and Late Spanish-English Bilingual Individuals

    ERIC Educational Resources Information Center

    Boutsen, Frank R.; Dvorak, Justin D.; Deweber, Derick D.

    2017-01-01

    Purpose: This study was conducted to compare the influence of word properties on gated single-word recognition in monolingual and bilingual individuals under conditions of native and nonnative accent and to determine whether word-form prosody facilitates recognition in bilingual individuals. Method: Word recognition was assessed in monolingual and…

  9. Working Memory Load Affects Processing Time in Spoken Word Recognition: Evidence from Eye-Movements

    PubMed Central

    Hadar, Britt; Skrzypek, Joshua E.; Wingfield, Arthur; Ben-David, Boaz M.

    2016-01-01

    In daily life, speech perception is usually accompanied by other tasks that tap into working memory capacity. However, the role of working memory on speech processing is not clear. The goal of this study was to examine how working memory load affects the timeline for spoken word recognition in ideal listening conditions. We used the “visual world” eye-tracking paradigm. The task consisted of spoken instructions referring to one of four objects depicted on a computer monitor (e.g., “point at the candle”). Half of the trials presented a phonological competitor to the target word that either overlapped in the initial syllable (onset) or at the last syllable (offset). Eye movements captured listeners' ability to differentiate the target noun from its depicted phonological competitor (e.g., candy or sandal). We manipulated working memory load by using a digit pre-load task, where participants had to retain either one (low-load) or four (high-load) spoken digits for the duration of a spoken word recognition trial. The data show that the high-load condition delayed real-time target discrimination. Specifically, a four-digit load was sufficient to delay the point of discrimination between the spoken target word and its phonological competitor. Our results emphasize the important role working memory plays in speech perception, even when performed by young adults in ideal listening conditions. PMID:27242424

  10. Newly learned word forms are abstract and integrated immediately after acquisition

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2015-01-01

    A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702

  11. Color associations to emotion and emotion-laden words: A collection of norms for stimulus construction and selection.

    PubMed

    Sutton, Tina M; Altarriba, Jeanette

    2016-06-01

    Color has the ability to influence a variety of human behaviors, such as object recognition, the identification of facial expressions, and the ability to categorize stimuli as positive or negative. Researchers have started to examine the relationship between emotional words and colors, and the findings have revealed that brightness is often associated with positive emotional words and darkness with negative emotional words (e.g., Meier, Robinson, & Clore, Psychological Science, 15, 82-87, 2004). In addition, words such as anger and failure seem to be inherently associated with the color red (e.g., Kuhbandner & Pekrun). The purpose of the present study was to construct norms for positive and negative emotion and emotion-laden words and their color associations. Participants were asked to provide the first color that came to mind for a set of 160 emotional items. The results revealed that the color RED was most commonly associated with negative emotion and emotion-laden words, whereas YELLOW and WHITE were associated with positive emotion and emotion-laden words, respectively. The present work provides researchers with a large database to aid in stimulus construction and selection.

  12. Automated smartphone audiometry: Validation of a word recognition test app.

    PubMed

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2018-03-01

    The objective was to develop and validate an automated smartphone word recognition test in a cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. Level of evidence: 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
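
    The agreement figures reported above (a linear correlation of 0.89 and 86% of points within a clinically acceptable margin) can be computed from paired scores in a few lines. A minimal sketch in Python, assuming hypothetical per-ear scores and an assumed ±10-percentage-point margin (the study's exact margin is not given here):

      import numpy as np

      # Hypothetical paired word recognition scores (percent correct), one pair per ear:
      # one score from the smartphone app, one from audiologist (booth) testing.
      app_scores = np.array([88, 92, 76, 60, 100, 84, 72, 96, 80, 68], dtype=float)
      booth_scores = np.array([84, 96, 80, 56, 100, 88, 76, 92, 84, 72], dtype=float)

      # Pearson correlation between the two test methods.
      r = np.corrcoef(app_scores, booth_scores)[0, 1]

      # Proportion of ears agreeing within an assumed +/-10-point margin
      # (the clinically acceptable margin used in the study is not given here).
      margin = 10.0
      within = np.mean(np.abs(app_scores - booth_scores) <= margin)

      print(f"Pearson r = {r:.2f}; within ±{margin:.0f} points: {within:.0%}")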

  13. Executive Dysfunction among Children with Reading Comprehension Deficits

    ERIC Educational Resources Information Center

    Locascio, Gianna; Mahone, E. Mark; Eason, Sarah H.; Cutting, Laurie E.

    2010-01-01

    Emerging research supports the contribution of executive function (EF) to reading comprehension; however, a unique pattern has not been established for children who demonstrate comprehension difficulties despite average word recognition ability (specific reading comprehension deficit; S-RCD). To identify particular EF components on which children…

  14. Word Recognition in Auditory Cortex

    ERIC Educational Resources Information Center

    DeWitt, Iain D. J.

    2013-01-01

    Although spoken word recognition is more fundamental to human communication than text recognition, knowledge of word-processing in auditory cortex is comparatively impoverished. This dissertation synthesizes current models of auditory cortex, models of cortical pattern recognition, models of single-word reading, results in phonetics and results in…

  15. Complete abolition of reading and writing ability with a third ventricle colloid cyst: implications for surgical intervention and proposed neural substrates of visual recognition and visual imaging ability.

    PubMed

    Barker, Lynne Ann; Morton, Nicholas; Romanowski, Charles A J; Gosden, Kevin

    2013-10-24

    We report a rare case of a patient unable to read (alexic) and write (agraphic) after a mild head injury. He had preserved speech and comprehension, could spell aloud, identify words spelt aloud and copy letter features. He was unable to visualise letters but showed no problems with digits. Neuropsychological testing revealed general visual memory, processing speed and imaging deficits. Imaging data revealed an 8 mm colloid cyst of the third ventricle that splayed the fornix. Little is known about functions mediated by fornical connectivity, but this region is thought to contribute to memory recall. Other regions thought to mediate letter recognition and letter imagery, including the visual word form area and visual pathways, were intact. We remediated reading and writing by multimodal letter retraining. The study raises issues about the neural substrates of reading, the role of fornical tracts in selective memory in the absence of other pathology, and effective remediation strategies for selective functional deficits.

  16. Individual differences in emotion word processing: A diffusion model analysis.

    PubMed

    Mueller, Christina J; Kuchinke, Lars

    2016-06-01

    The exploratory study investigated individual differences in implicit processing of emotional words in a lexical decision task. A processing advantage for positive words was observed, and differences between happy and fear-related words in response times were predicted by individual differences in specific variables of emotion processing: Whereas more pronounced goal-directed behavior was related to a specific slowdown in processing of fear-related words, the rate of spontaneous eye blinks (indexing brain dopamine levels) was associated with a processing advantage of happy words. Estimating diffusion model parameters revealed that the drift rate (rate of information accumulation) captures unique variance of processing differences between happy and fear-related words, with highest drift rates observed for happy words. Overall emotion recognition ability predicted individual differences in drift rates between happy and fear-related words. The findings emphasize that a significant amount of variance in emotion processing is explained by individual differences in behavioral data.
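
    The drift rate referred to above is the mean rate of evidence accumulation in a drift-diffusion model of two-choice decisions. The following is a minimal simulation sketch, not the authors' fitting procedure (which estimates parameters from observed response-time distributions); it only illustrates, with invented parameter values, how a higher drift rate yields faster and more accurate responses:

      import numpy as np

      rng = np.random.default_rng(0)

      def simulate_ddm(drift, boundary=0.1, noise=0.1, dt=0.001, non_decision=0.3, n_trials=2000):
          """Simulate a basic two-boundary drift-diffusion process that starts midway
          between the boundaries; returns mean response time (s) and accuracy."""
          rts, upper_hits = [], []
          for _ in range(n_trials):
              x, t = boundary / 2.0, 0.0
              while 0.0 < x < boundary:
                  x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                  t += dt
              rts.append(t + non_decision)
              upper_hits.append(x >= boundary)
          return np.mean(rts), np.mean(upper_hits)

      # Illustrative comparison: a higher drift rate (e.g., for happy words) yields
      # faster and more accurate lexical decisions than a lower drift rate
      # (e.g., for fear-related words). All parameter values are invented.
      for label, drift in [("higher drift", 0.25), ("lower drift", 0.15)]:
          rt, acc = simulate_ddm(drift)
          print(f"{label}: mean RT = {rt:.3f} s, accuracy = {acc:.2f}")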

  17. Modernising speech audiometry: using a smartphone application to test word recognition.

    PubMed

    van Zyl, Marianne; Swanepoel, De Wet; Myburgh, Hermanus C

    2018-04-20

    This study aimed to develop and assess a method to measure word recognition abilities using a smartphone application (App) connected to an audiometer. Word lists were recorded in South African English and Afrikaans. Analyses were conducted to determine the effect of hardware used for presentation (computer, compact-disc player, or smartphone) on the frequency content of recordings. An Android App was developed to enable presentation of recorded materials via a smartphone connected to the auxiliary input of the audiometer. Experiments were performed to test feasibility and validity of the developed App and recordings. Participants were 100 young adults (18-30 years) with pure tone thresholds ≤15 dB across the frequency spectrum (250-8000 Hz). Hardware used for presentation had no significant effect on the frequency content of recordings. Listening experiments indicated good inter-list reliability for recordings in both languages, with no significant differences between scores on different lists at each of the tested intensities. Performance-intensity functions had slopes of 4.05%/dB for English and 4.75%/dB for Afrikaans lists at the 50% point. The developed smartphone App constitutes a feasible and valid method for measuring word recognition scores, and can support standardisation and accessibility of recorded speech audiometry.
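
    The performance-intensity slopes quoted above (about 4-5%/dB at the 50% point) come from fitting a sigmoid to word recognition scores measured at several presentation levels. A minimal sketch of such a fit using a logistic function; the levels and scores below are invented, and the study's exact fitting procedure is not specified here:

      import numpy as np
      from scipy.optimize import curve_fit

      def logistic_pi(level, midpoint, slope50):
          """Logistic performance-intensity function (percent correct).
          For a logistic curve in percent, the maximum slope is 25 * k, so the
          growth constant k equals slope50 / 25 when slope50 is the slope at 50%."""
          k = slope50 / 25.0
          return 100.0 / (1.0 + np.exp(-k * (level - midpoint)))

      # Hypothetical word recognition scores (% correct) at several presentation levels (dB).
      levels = np.array([10, 15, 20, 25, 30, 35, 40], dtype=float)
      scores = np.array([5, 18, 42, 63, 82, 93, 98], dtype=float)

      (midpoint, slope50), _ = curve_fit(logistic_pi, levels, scores, p0=[25.0, 4.0])
      print(f"50% point at {midpoint:.1f} dB, slope at the 50% point = {slope50:.2f} %/dB")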

  18. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences.

    PubMed

    Koeritzer, Margaret A; Rogers, Chad S; Van Engen, Kristin J; Peelle, Jonathan E

    2018-03-15

    The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. https://doi.org/10.23641/asha.5848059.
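
    Recognition memory indexed by d' combines hit and false-alarm rates through the inverse normal transform. A minimal sketch with invented counts and a standard log-linear correction for extreme rates (whether the study applied such a correction is not stated here):

      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' from raw counts, with a log-linear (+0.5) correction so that
          hit or false-alarm rates of exactly 0 or 1 do not yield infinite z-scores."""
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical counts for one listener in one noise/ambiguity condition.
      print(f"d' = {d_prime(hits=22, misses=8, false_alarms=6, correct_rejections=24):.2f}")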

  19. Lexical and age effects on word recognition in noise in normal-hearing children.

    PubMed

    Ren, Cuncun; Liu, Sha; Liu, Haihong; Kong, Ying; Liu, Xin; Li, Shujing

    2015-12-01

    The purposes of the present study were (1) to examine the lexical and age effects on word recognition of normal-hearing (NH) children in noise, and (2) to compare word-recognition performance in noise to that in quiet listening conditions. Participants were 213 NH children aged 3-6 years. Eighty-nine and 124 of the participants were tested in noise and quiet listening conditions, respectively. The Standard-Chinese Lexical Neighborhood Test, which contains lists of words in four lexical categories (i.e., dissyllabic easy (DE), dissyllabic hard (DH), monosyllabic easy (ME), and monosyllabic hard (MH)), was used to evaluate Mandarin Chinese word recognition in speech spectrum-shaped noise (SSN) at a signal-to-noise ratio (SNR) of 0 dB. A two-way repeated-measures analysis of variance was conducted to examine the lexical effects on word recognition, with syllable length and difficulty level as the main factors, in the quiet and noise listening conditions. The effects of age on word-recognition performance were examined using a regression model. Word-recognition performance in noise was significantly poorer than that in quiet, and the individual variations in performance in noise were much greater than those in quiet. Word recognition scores showed that the lexical effects were significant in the SSN. Children scored higher with dissyllabic words than with monosyllabic words, and "easy" words scored higher than "hard" words in the noise condition. The scores of the NH children in the SSN (SNR = 0 dB) for the DE, DH, ME, and MH words were 85.4, 65.9, 71.7, and 46.2% correct, respectively. Word-recognition performance also increased with age in each lexical category for the NH children tested in noise. Both age and lexical characteristics of words had significant influences on Mandarin-Chinese word recognition in noise. The lexical effects were more obvious under noise listening conditions than in quiet. Word-recognition performance in noise increased with age in NH children aged 3-6 years and had not reached a plateau by 6 years of age. Copyright © 2015. Published by Elsevier Ireland Ltd.

  20. Rapid Word Recognition as a Measure of Word-Level Automaticity and Its Relation to Other Measures of Reading

    ERIC Educational Resources Information Center

    Frye, Elizabeth M.; Gosky, Ross

    2012-01-01

    The present study investigated the relationship between rapid recognition of individual words (Word Recognition Test) and two measures of contextual reading: (1) grade-level Passage Reading Test (IRI passage) and (2) performance on standardized STAR Reading Test. To establish if time of presentation on the word recognition test was a factor in…

  1. The effect of word concreteness on recognition memory.

    PubMed

    Fliessbach, K; Weis, S; Klaver, P; Elger, C E; Weber, B

    2006-09-01

    Concrete words that are readily imagined are better remembered than abstract words. Theoretical explanations for this effect either claim a dual coding of concrete words in the form of both a verbal and a sensory code (dual-coding theory), or a more accessible semantic network for concrete words than for abstract words (context-availability theory). However, the neural mechanisms of improved memory for concrete versus abstract words are poorly understood. Here, we investigated the processing of concrete and abstract words during encoding and retrieval in a recognition memory task using event-related functional magnetic resonance imaging (fMRI). As predicted, memory performance was significantly better for concrete words than for abstract words. Abstract words elicited stronger activations of the left inferior frontal cortex both during encoding and recognition than did concrete words. Stronger activation of this area was also associated with successful encoding for both abstract and concrete words. Concrete words elicited stronger activations bilaterally in the posterior inferior parietal lobe during recognition. The left parietal activation was associated with correct identification of old stimuli. The anterior precuneus, left cerebellar hemisphere and the posterior and anterior cingulate cortex showed activations both for successful recognition of concrete words and for online processing of concrete words during encoding. Additionally, we observed a correlation across subjects between brain activity in the left anterior fusiform gyrus and hippocampus during recognition of learned words and the strength of the concreteness effect. These findings support the idea of specific brain processes for concrete words, which are reactivated during successful recognition.

  2. Preserved visual lexicosemantics in global aphasia: a right-hemisphere contribution?

    PubMed

    Gold, B T; Kertesz, A

    2000-12-01

    Extensive testing of a patient, GP, who sustained large-scale destruction of left-hemisphere (LH) language regions was undertaken in order to address several issues concerning the ability of nonperisylvian areas to extract meaning from printed words. Testing revealed recognition of superordinate boundaries of animals, tools, vegetables, fruit, clothes, and furniture. GP was able to distinguish proper names from other nouns and from nonwords. GP was also able to differentiate words representing living things from those denoting nonliving things. The extent of the LH infarct, which resulted in a global impairment of phonological and syntactic processing, suggests LH specificity for these functions but considerable right-hemisphere (RH) participation in visual lexicosemantic processing. The relative preservation of visual lexicosemantic abilities despite severe impairment to all aspects of phonological coding demonstrates the importance of the direct route to the meaning of single printed words.

  3. Exploring a recognition-induced recognition decrement

    PubMed Central

    Dopkins, Stephen; Ngo, Catherine Trinh; Sargent, Jesse

    2007-01-01

    Four experiments explored a recognition decrement that is associated with the recognition of a word from a short list. The stimulus material for demonstrating the phenomenon was a list of words of different syntactic types. A word from the list was recognized less well following a decision that a word of the same type had occurred in the list than following a decision that such a word had not occurred in the list. A recognition decrement did not occur for a word of a given type following a positive recognition decision to a word of a different type. A recognition decrement did not occur when the list consisted exclusively of nouns. It was concluded that the phenomenon may reflect a criterion shift but probably does not reflect a list strength effect, suppression, or familiarity attribution consequent to a perceived discrepancy between actual and expected fluency. PMID:17063915

  4. Improving speech-in-noise recognition for children with hearing loss: Potential effects of language abilities, binaural summation, and head shadow

    PubMed Central

    Nittrouer, Susan; Caldwell-Tarr, Amanda; Tarr, Eric; Lowenstein, Joanna H.; Rice, Caitlin; Moberly, Aaron C.

    2014-01-01

    Objective: This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children’s abilities to recognize speech in noise. Design: Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. Study sample: Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. Results: Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. Conclusion: These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms. PMID:23834373

  5. Recognition of oral spelling is diagnostic of the central reading processes.

    PubMed

    Schubert, Teresa; McCloskey, Michael

    2015-01-01

    The task of recognition of oral spelling (stimulus: "C-A-T", response: "cat") is often administered to individuals with acquired written language disorders, yet there is no consensus about the underlying cognitive processes. We adjudicate between two existing hypotheses: Recognition of oral spelling uses central reading processes, or recognition of oral spelling uses central spelling processes in reverse. We tested the recognition of oral spelling and spelling to dictation abilities of a single individual with acquired dyslexia and dysgraphia. She was impaired relative to matched controls in spelling to dictation but unimpaired in recognition of oral spelling. Recognition of oral spelling for exception words (e.g., colonel) and pronounceable nonwords (e.g., larth) was intact. Our results were predicted by the hypothesis that recognition of oral spelling involves the central reading processes. We conclude that recognition of oral spelling is a useful tool for probing the integrity of the central reading processes.

  6. The emergence of automaticity in reading: Effects of orthographic depth and word decoding ability on an adjusted Stroop measure.

    PubMed

    Megherbi, Hakima; Elbro, Carsten; Oakhill, Jane; Segui, Juan; New, Boris

    2018-02-01

    How long does it take for word reading to become automatic? Does the appearance and development of automaticity differ as a function of orthographic depth (e.g., French vs. English)? These questions were addressed in a longitudinal study of English and French beginning readers. The study focused on automaticity as obligatory processing as measured in the Stroop test. Measures of decoding ability and the Stroop effect were taken at three time points during first grade (and during second grade in the United Kingdom) in 84 children. The study is the first to adjust the classic Stroop effect for inhibition (of distracting colors). The adjusted Stroop effect was zero in the absence of reading ability, and it was found to develop in tandem with decoding ability. After a further control for decoding, no effects of age or orthography were found on the adjusted Stroop measure. The results are in line with theories of the development of whole word recognition that emphasize the importance of the acquisition of the basic orthographic code. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Acquired prosopagnosia without word recognition deficits.

    PubMed

    Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley

    2015-01-01

    It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level, such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere, while one had bilateral lesions that were more pronounced in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks, totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across the seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.

  8. Analytic study of the Tadoma method: background and preliminary results.

    PubMed

    Norton, S J; Schultz, M C; Reed, C M; Braida, L D; Durlach, N I; Rabinowitz, W M; Chomsky, C

    1977-09-01

    Certain deaf-blind persons have been taught, through the Tadoma method of speechreading, to use vibrotactile cues from the face and neck to understand speech. This paper reports the results of preliminary tests of the speechreading ability of one adult Tadoma user. The tests were of four major types: (1) discrimination of speech stimuli; (2) recognition of words in isolation and in sentences; (3) interpretation of prosodic and syntactic features in sentences; and (4) comprehension of written (Braille) and oral speech. Words in highly contextual environments were much better perceived than were words in low-context environments. Many of the word errors involved phonemic substitutions which shared articulatory features with the target phonemes, with a higher error rate for vowels than consonants. Relative to performance on word-recognition tests, performance on some of the discrimination tests was worse than expected. Perception of sentences appeared to be mildly sensitive to rate of talking and to speaker differences. Results of the tests on perception of prosodic and syntactic features, while inconclusive, indicate that many of the features tested were not used in interpreting sentences. On an English comprehension test, a higher score was obtained for items administered in Braille than through oral presentation.

  9. Event-related potentials during word mapping to object shape predict toddlers' vocabulary size

    PubMed Central

    Borgström, Kristina; Torkildsen, Janne von Koss; Lindgren, Magnus

    2015-01-01

    What role does attention to different object properties play in early vocabulary development? This longitudinal study using event-related potentials in combination with behavioral measures investigated 20- and 24-month-olds' (n = 38; n = 34; overlapping n = 24) ability to use object shape and object part information in word-object mapping. The N400 component was used to measure semantic priming by images containing shape or detail information. At 20 months, the N400 to words primed by object shape varied in topography and amplitude depending on vocabulary size, and these differences predicted productive vocabulary size at 24 months. At 24 months, when most of the children had vocabularies of several hundred words, the relation between vocabulary size and the N400 effect in a shape context was weaker. Detached object parts did not function as word primes regardless of age or vocabulary size, although the part-objects were identified behaviorally. The behavioral measure, however, also showed relatively poor recognition of the part-objects compared to the shape-objects. These three findings provide new support for the link between shape recognition and early vocabulary development. PMID:25762957

  10. Improved word recognition for observers with age-related maculopathies using compensation filters

    NASA Technical Reports Server (NTRS)

    Lawton, Teri B.

    1988-01-01

    A method for improving word recognition for people with age-related maculopathies, which cause a loss of central vision, is discussed. It is found that individualized compensation filters based on a person's normalized contrast sensitivity function can improve word recognition for this population. It is shown that 27-70 percent more magnification is needed for unfiltered words compared to filtered words. The improvement in word recognition is positively correlated with the severity of vision loss.

  11. [Explicit memory for type font of words in source monitoring and recognition tasks].

    PubMed

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember the type fonts of words, using two methods of examining explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and the test words using two type fonts: Gothic and MARU. After studying words under one of two encoding conditions, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words were previously presented or not (Exp. 2). We compared the source judgments with the old/new recognition data. The data showed conscious recollection of word type font in the source-monitoring task and a dissociation between source-monitoring and old/new-recognition performance.

  12. Educational Implications of Conductive Hearing Loss in School Children.

    ERIC Educational Resources Information Center

    Lyon, David J.; And Others

    1986-01-01

    The study investigated specific linguistic abilities/disabilities of 15 children with conductive hearing loss and a history of middle ear dysfunction. Results found significant deficits in verbal intelligence, word recognition, and receptive syntactic skills substantiating the finding that conductive hearing loss due to otitis media is deleterious…

  13. Stimulus-Dependent Flexibility in Non-Human Auditory Pitch Processing

    ERIC Educational Resources Information Center

    Bregman, Micah R.; Patel, Aniruddh D.; Gentner, Timothy Q.

    2012-01-01

    Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are…

  14. Genetic and environmental influences on word recognition and spelling deficits as a function of age.

    PubMed

    Friend, Angela; DeFries, John C; Wadsworth, Sally J; Olson, Richard K

    2007-05-01

    Previous twin studies have suggested a possible developmental dissociation between genetic influences on word recognition and spelling deficits, wherein genetic influence declined across age for word recognition, and increased for spelling recognition. The present study included two measures of word recognition (timed, untimed) and two measures of spelling (recognition, production) in younger and older twins. The heritability estimates for the two word recognition measures were .65 (timed) and .64 (untimed) in the younger group and .65 and .58 respectively in the older group. For spelling, the corresponding estimates were .57 (recognition) and .51 (production) in the younger group and .65 and .67 in the older group. Although these age group differences were not significant, the pattern of decline in heritability across age for reading and increase for spelling conformed to that predicted by the developmental dissociation hypothesis. However, the tests for an interaction between genetic influences on word recognition and spelling deficits as a function of age were not significant.

  15. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
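
    The phi-square statistic mentioned above is one way to quantify how similar two stimuli's confusion-response distributions are (it equals chi-square divided by the total response count, so smaller values indicate more similar, i.e. more confusable, profiles). A minimal sketch with invented confusion counts; the exact computation used in the study may involve additional steps not shown here:

      import numpy as np
      from scipy.stats import chi2_contingency

      def phi_square(counts_a, counts_b):
          """Phi-square (mean-square contingency) between two response-count vectors:
          chi-square for the 2 x k table divided by the total count. Smaller values
          mean the two stimuli draw more similar (more confusable) response profiles."""
          table = np.array([counts_a, counts_b], dtype=float)
          chi2, _, _, _ = chi2_contingency(table, correction=False)
          return chi2 / table.sum()

      # Hypothetical confusion counts: how often each of four response alternatives
      # was chosen when stimulus A, B, or C was presented.
      stim_a = [40, 30, 20, 10]
      stim_b = [35, 32, 22, 11]   # similar profile to A -> small phi-square
      stim_c = [5, 10, 25, 60]    # different profile    -> large phi-square

      print(f"phi2(A, B) = {phi_square(stim_a, stim_b):.3f}")
      print(f"phi2(A, C) = {phi_square(stim_a, stim_c):.3f}")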

  16. Recognizing Spoken Words: The Neighborhood Activation Model

    PubMed Central

    Luce, Paul A.; Pisoni, David B.

    2012-01-01

    Objective A fundamental problem in the study of human spoken word recognition concerns the structural relations among the sound patterns of words in memory and the effects these relations have on spoken word recognition. In the present investigation, computational and experimental methods were employed to address a number of fundamental issues related to the representation and structural organization of spoken words in the mental lexicon and to lay the groundwork for a model of spoken word recognition. Design Using a computerized lexicon consisting of transcriptions of 20,000 words, similarity neighborhoods for each of the transcriptions were computed. Among the variables of interest in the computation of the similarity neighborhoods were: 1) the number of words occurring in a neighborhood, 2) the degree of phonetic similarity among the words, and 3) the frequencies of occurrence of the words in the language. The effects of these variables on auditory word recognition were examined in a series of behavioral experiments employing three experimental paradigms: perceptual identification of words in noise, auditory lexical decision, and auditory word naming. Results The results of each of these experiments demonstrated that the number and nature of words in a similarity neighborhood affect the speed and accuracy of word recognition. A neighborhood probability rule was developed that adequately predicted identification performance. This rule, based on Luce's (1959) choice rule, combines stimulus word intelligibility, neighborhood confusability, and frequency into a single expression. Based on this rule, a model of auditory word recognition, the neighborhood activation model, was proposed. This model describes the effects of similarity neighborhood structure on the process of discriminating among the acoustic-phonetic representations of words in memory. The results of these experiments have important implications for current conceptions of auditory word recognition in normal and hearing impaired populations of children and adults. PMID:9504270
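
    The frequency-weighted neighborhood probability rule described above can be written as a single expression: the target's identification probability is its own frequency-weighted confusion probability divided by that quantity plus the summed frequency-weighted confusion probabilities of its neighbors. A schematic Python rendering with invented values; published applications of the model include further details (e.g., transformed frequency weights) not shown here:

      def neighborhood_probability(p_target_given_target, target_freq, neighbors):
          """Schematic frequency-weighted neighborhood probability rule.

          p_target_given_target: probability the target is identified as itself
                                 (stimulus word intelligibility).
          target_freq:           frequency weight of the target word.
          neighbors:             (p_neighbor_given_target, neighbor_freq) pairs, i.e.,
                                 each neighbor's confusability with the target and its
                                 frequency weight.
          """
          target_support = p_target_given_target * target_freq
          neighbor_support = sum(p * f for p, f in neighbors)
          return target_support / (target_support + neighbor_support)

      # Hypothetical illustration: a word from a sparse neighborhood is predicted to be
      # identified more often than the same word embedded in a dense neighborhood.
      sparse = neighborhood_probability(0.7, 1.0, [(0.1, 0.5)])
      dense = neighborhood_probability(0.7, 1.0, [(0.1, 0.5)] * 8)
      print(f"sparse neighborhood: {sparse:.2f}, dense neighborhood: {dense:.2f}")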

  17. Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.

    PubMed

    Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel

    2016-10-01

    Embodied approaches to cognition argue that retrieval involves the re-enactment of both the sensory and the motor components of the to-be-remembered material. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then told to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same or to the opposite side as the "Yes" response key. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgement at the retrieval stage of a recognition task. More broadly, our data highlight that memory judgements are a function of motor component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.

  18. Speech perception and communication ability over the telephone by Mandarin-speaking children with cochlear implants.

    PubMed

    Wu, Che-Ming; Liu, Tien-Chen; Wang, Nan-Mai; Chao, Wei-Chieh

    2013-08-01

    The aims were (1) to assess speech perception and communication ability during real telephone calls by Mandarin-speaking children with cochlear implants and to compare them to live-voice perception, (2) to report the general condition of telephone use in this population, and (3) to investigate the factors that correlate with telephone speech perception performance. Fifty-six children with over 4 years of implant use (aged 6.8-13.6 years, mean duration 8.0 years) took three speech perception tests administered using telephone and live voice to examine sentence, monosyllabic-word and Mandarin tone perception. The children also filled out a questionnaire survey investigating everyday telephone use. The Wilcoxon signed-rank test was used to compare the scores between live-voice and telephone tests, and Pearson's test to examine the correlation between them. The mean scores were 86.4%, 69.8% and 70.5% respectively for sentence, word and tone recognition over the telephone. The corresponding live-voice mean scores were 94.3%, 84.0% and 70.8%. The Wilcoxon signed-rank test showed that the sentence and word scores differed significantly between the telephone and live-voice tests, while the tone recognition scores did not, indicating that tone perception was less degraded by telephone transmission than word and sentence perception. Spearman's test showed that chronological age and duration of implant use were weakly correlated with the perception test scores. The questionnaire survey showed 78% of the children could initiate phone calls and 59% could use the telephone 2 years after implantation. Implanted children are potentially capable of using the telephone 2 years after implantation, and communication ability over the telephone becomes satisfactory 4 years after implantation. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  19. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.

  20. Attempting to "Increase Intake from the Input": Attention and Word Learning in Children with Autism.

    PubMed

    Tenenbaum, Elena J; Amso, Dima; Righi, Giulia; Sheinkopf, Stephen J

    2017-06-01

    Previous work has demonstrated that social attention is related to early language abilities. We explored whether we can facilitate word learning among children with autism by directing attention to areas of the scene that have been demonstrated as relevant for successful word learning. We tracked eye movements to faces and objects while children watched videos of a woman teaching them new words. Test trials measured participants' recognition of these novel word-object pairings. Results indicate that for children with autism and typically developing children, pointing to the speaker's mouth while labeling a novel object impaired performance, likely because it distracted participants from the target object. In contrast, for children with autism, holding the object close to the speaker's mouth improved performance.

  1. Executive Dysfunction Among Children With Reading Comprehension Deficits

    PubMed Central

    Locascio, Gianna; Mahone, E. Mark; Eason, Sarah H.; Cutting, Laurie E.

    2010-01-01

    Emerging research supports the contribution of executive function (EF) to reading comprehension; however, a unique pattern has not been established for children who demonstrate comprehension difficulties despite average word recognition ability (specific reading comprehension deficit; S-RCD). To identify particular EF components on which children with S-RCD struggle, a range of EF skills was compared among 86 children, ages 10 to 14, grouped by word reading and comprehension abilities: 24 average readers, 44 with word recognition deficits (WRD), and 18 S-RCD. An exploratory principal components analysis of EF tests identified three latent factors, used in subsequent group comparisons: Planning/Spatial Working Memory, Verbal Working Memory, and Response Inhibition. The WRD group exhibited deficits (relative to controls) on Verbal Working Memory and Inhibition factors; S-RCD children performed more poorly than controls on the Planning factor. Further analyses suggested the WRD group’s poor performance on EF factors was a by-product of core deficits linked to WRD (after controlling for phonological processing, this group no longer showed EF deficits). In contrast, the S-RCD group’s poor performance on the planning component remained significant after controlling for phonological processing. Findings suggest reading comprehension difficulties are linked to executive dysfunction; in particular, poor strategic planning/organizing may lead to reading comprehension problems. PMID:20375294

  2. Executive dysfunction among children with reading comprehension deficits.

    PubMed

    Locascio, Gianna; Mahone, E Mark; Eason, Sarah H; Cutting, Laurie E

    2010-01-01

    Emerging research supports the contribution of executive function (EF) to reading comprehension; however, a unique pattern has not been established for children who demonstrate comprehension difficulties despite average word recognition ability (specific reading comprehension deficit; S-RCD). To identify particular EF components on which children with S-RCD struggle, a range of EF skills was compared among 86 children, ages 10 to 14, grouped by word reading and comprehension abilities: 24 average readers, 44 with word recognition deficits (WRD), and 18 S-RCD. An exploratory principal components analysis of EF tests identified three latent factors, used in subsequent group comparisons: Planning/ Spatial Working Memory, Verbal Working Memory, and Response Inhibition. The WRD group exhibited deficits (relative to controls) on Verbal Working Memory and Inhibition factors; S-RCD children performed more poorly than controls on the Planning factor. Further analyses suggested the WRD group's poor performance on EF factors was a by-product of core deficits linked to WRD (after controlling for phonological processing, this group no longer showed EF deficits). In contrast, the S-RCD group's poor performance on the planning component remained significant after controlling for phonological processing. Findings suggest reading comprehension difficulties are linked to executive dysfunction; in particular, poor strategic planning/organizing may lead to reading comprehension problems.
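
    The exploratory principal components analysis described above reduces a battery of EF test scores to a small number of latent factors whose scores can then be compared across reading groups. A minimal sketch on simulated data; the three-component solution follows the abstract, but the measures and data are invented:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(1)

      # Simulated scores for 86 children on nine hypothetical EF measures.
      ef_scores = rng.normal(size=(86, 9))

      # Standardize the measures, then extract three components (as in the abstract).
      pca = PCA(n_components=3)
      factor_scores = pca.fit_transform(StandardScaler().fit_transform(ef_scores))

      print("variance explained:", np.round(pca.explained_variance_ratio_, 2))
      print("factor scores shape:", factor_scores.shape)  # one 3-factor profile per child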

  3. Recall and recognition of verbal paired associates in early Alzheimer's disease.

    PubMed

    Lowndes, G J; Saling, M M; Ames, D; Chiu, E; Gonzalez, L M; Savage, G R

    2008-07-01

    The primary impairment in early Alzheimer's disease (AD) is in encoding/consolidation, resulting from medial temporal lobe (MTL) pathology. AD patients perform poorly on cued-recall paired associate learning (PAL) tasks, which assess the ability of the MTLs to encode relational memory. Since encoding and retrieval processes are confounded within performance indexes on cued-recall PAL, its specificity for AD is limited. Recognition paradigms tend to show good specificity for AD, and are well tolerated, but are typically less sensitive than recall tasks. Associate-recognition is a novel PAL task requiring a combination of recall and recognition processes. We administered a verbal associate-recognition test and a cued-recall analogue to 22 early AD patients and 55 elderly controls to compare their ability to discriminate these groups. Both paradigms used eight arbitrarily related word pairs (e.g., pool-teeth) with varying degrees of imageability. Associate-recognition was as effective as the cued-recall analogue in discriminating the groups, and logistic regression demonstrated that classification rates for the two tasks were equivalent. These preliminary findings provide support for the clinical value of this recognition tool. Conceptually it has potential for greater specificity in informing neuropsychological diagnosis of AD in clinical samples, but this requires further empirical support.

  4. An Investigation of the Role of Grapheme Units in Word Recognition

    ERIC Educational Resources Information Center

    Lupker, Stephen J.; Acha, Joana; Davis, Colin J.; Perea, Manuel

    2012-01-01

    In most current models of word recognition, the word recognition process is assumed to be driven by the activation of letter units (i.e., that letters are the perceptual units in reading). An alternative possibility is that the word recognition process is driven by the activation of grapheme units, that is, that graphemes, rather than letters, are…

  5. Theory of Mind and Emotion Recognition Skills in Children with Specific Language Impairment, Autism Spectrum Disorder and Typical Development: Group Differences and Connection to Knowledge of Grammatical Morphology, Word-Finding Abilities and Verbal Working Memory

    ERIC Educational Resources Information Center

    Loukusa, Soile; Mäkinen, Leena; Kuusikko-Gauffin, Sanna; Ebeling, Hanna; Moilanen, Irma

    2014-01-01

    Background: Social perception skills, such as understanding the mind and emotions of others, affect children's communication abilities in real-life situations. In addition to autism spectrum disorder (ASD), there is increasing knowledge that children with specific language impairment (SLI) also demonstrate difficulties in their social…

  6. Word-level recognition of multifont Arabic text using a feature vector matching approach

    NASA Astrophysics Data System (ADS)

    Erlandson, Erik J.; Trenkle, John M.; Vogt, Robert C., III

    1996-03-01

    Many text recognition systems recognize text imagery at the character level and assemble words from the recognized characters. An alternative approach is to recognize text imagery at the word level, without analyzing individual characters. This approach avoids the problem of individual character segmentation, and can overcome local errors in character recognition. A word-level recognition system for machine-printed Arabic text has been implemented. Arabic is a script language, and is therefore difficult to segment at the character level. Character segmentation has been avoided by recognizing text imagery of complete words. The Arabic recognition system computes a vector of image-morphological features on a query word image. This vector is matched against a precomputed database of vectors from a lexicon of Arabic words. Vectors from the database with the highest match score are returned as hypotheses for the unknown image. Several feature vectors may be stored for each word in the database. Database feature vectors generated using multiple fonts and noise models allow the system to be tuned to its input stream. Used in conjunction with database pruning techniques, this Arabic recognition system has obtained promising word recognition rates on low-quality multifont text imagery.
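
    The matching stage described above amounts to nearest-neighbor retrieval of a query image's feature vector against precomputed vectors for a lexicon, with possibly several stored vectors per word (one per font or noise model). A minimal sketch of that stage using cosine similarity; the system's actual image-morphological features and match score are not reproduced here:

      import numpy as np

      def best_word_hypotheses(query_vector, lexicon_vectors, top_k=3):
          """Rank lexicon words by cosine similarity between a query word image's
          feature vector and each word's stored vectors (several per word, e.g., one
          per font or noise model); a word's score is its best-matching vector."""
          query = np.asarray(query_vector, dtype=float)
          query = query / np.linalg.norm(query)
          scores = {}
          for word, vectors in lexicon_vectors.items():
              mat = np.asarray(vectors, dtype=float)
              mat = mat / np.linalg.norm(mat, axis=1, keepdims=True)
              scores[word] = float((mat @ query).max())
          return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

      # Hypothetical 4-dimensional feature vectors for three lexicon entries.
      lexicon = {
          "kitab":   [[0.9, 0.1, 0.3, 0.2], [0.8, 0.2, 0.4, 0.1]],
          "madrasa": [[0.1, 0.9, 0.2, 0.4]],
          "qalam":   [[0.3, 0.2, 0.9, 0.1]],
      }
      print(best_word_hypotheses([0.85, 0.15, 0.35, 0.15], lexicon))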

  7. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Italians Use Abstract Knowledge about Lexical Stress during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Sulpizio, Simone; McQueen, James M.

    2012-01-01

    In two eye-tracking experiments in Italian, we investigated how acoustic information and stored knowledge about lexical stress are used during the recognition of tri-syllabic spoken words. Experiment 1 showed that Italians use acoustic cues to a word's stress pattern rapidly in word recognition, but only for words with antepenultimate stress.…

  9. The Slow Developmental Time Course of Real-Time Spoken Word Recognition

    ERIC Educational Resources Information Center

    Rigler, Hannah; Farris-Trimble, Ashley; Greiner, Lea; Walker, Jessica; Tomblin, J. Bruce; McMurray, Bob

    2015-01-01

    This study investigated the developmental time course of spoken word recognition in older children using eye tracking to assess how the real-time processing dynamics of word recognition change over development. We found that 9-year-olds were slower to activate the target words and showed more early competition from competitor words than…

  10. Developmental Spelling and Word Recognition: A Validation of Ehri's Model of Word Recognition Development

    ERIC Educational Resources Information Center

    Ebert, Ashlee A.

    2009-01-01

    Ehri's developmental model of word recognition outlines early reading development that spans from the use of logos to advanced knowledge of oral and written language to read words. Henderson's developmental spelling theory presents stages of word knowledge that progress in a similar manner to Ehri's phases. The purpose of this research study was…

  11. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    PubMed

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study investigated the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched previously studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites relative to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.

  12. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition

    NASA Astrophysics Data System (ADS)

    Menasri, Farès; Louradour, Jérôme; Bianne-Bernard, Anne-Laure; Kermorvant, Christopher

    2012-01-01

    This paper describes the system for the recognition of French handwriting submitted by A2iA to the competition organized at ICDAR2011 using the Rimes database. This system is composed of several recognizers based on three different recognition technologies, combined using a novel combination method. A framework for multi-word recognition based on weighted finite state transducers is presented, using an explicit word segmentation, a combination of isolated word recognizers, and a language model. The system was tested both for isolated word recognition and for multi-word line recognition and submitted to the Rimes-ICDAR2011 competition. It outperformed all previously proposed systems on these tasks.
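
    The abstract above describes combining several isolated word recognizers with a language model in a WFST-style framework. As a rough, hedged illustration of that general idea (not the A2iA system itself), the sketch below merges per-word hypothesis lists from two hypothetical recognizers and rescores candidate word sequences with a toy bigram language model; all words, scores, and weights are made up for illustration.

```python
# Hedged sketch (not the A2iA system): combine per-word hypothesis lists from
# several recognizers, then rescore word sequences with a bigram language model,
# in the spirit of a WFST-style decoder. Scores and weights are illustrative.
from math import log
from itertools import product

def combine_recognizers(hyp_lists, weights):
    """hyp_lists: list (one per recognizer) of dicts word -> probability."""
    combined = {}
    for w_rec, hyps in zip(weights, hyp_lists):
        for word, p in hyps.items():
            combined[word] = combined.get(word, 0.0) + w_rec * p
    return combined  # word -> combined score

def decode_line(slot_hyps, bigram_lm, lm_weight=0.5):
    """slot_hyps: one combined-score dict per word position.
    bigram_lm: dict (prev_word, word) -> probability. Exhaustive search for clarity."""
    best_seq, best_score = None, float("-inf")
    for seq in product(*[list(h.items()) for h in slot_hyps]):
        words = [w for w, _ in seq]
        acoustic = sum(log(s) for _, s in seq)
        lm = sum(log(bigram_lm.get((p, w), 1e-6))
                 for p, w in zip(["<s>"] + words[:-1], words))
        score = acoustic + lm_weight * lm
        if score > best_score:
            best_seq, best_score = words, score
    return best_seq

# Toy usage with made-up hypotheses for a two-word line
rec_a = [{"bonjour": 0.7, "bonsoir": 0.3}, {"monsieur": 0.6, "messieurs": 0.4}]
rec_b = [{"bonjour": 0.5, "bonsoir": 0.5}, {"monsieur": 0.8, "messieurs": 0.2}]
slots = [combine_recognizers([a, b], weights=[0.6, 0.4]) for a, b in zip(rec_a, rec_b)]
lm = {("<s>", "bonjour"): 0.2, ("bonjour", "monsieur"): 0.3}
print(decode_line(slots, lm))  # -> ['bonjour', 'monsieur']
```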

  13. Gender Differences in the Recognition of Vocal Emotions

    PubMed Central

    Lausen, Adi; Schacht, Annekathrin

    2018-01-01

    The conflicting findings from the few studies conducted with regard to gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than male actors. The mixed pattern for emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions. They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody. PMID:29922202

  14. Research and Implementation of Tibetan Word Segmentation Based on Syllable Methods

    NASA Astrophysics Data System (ADS)

    Jiang, Jing; Li, Yachao; Jiang, Tao; Yu, Hongzhi

    2018-03-01

    Tibetan word segmentation (TWS) is an important problem in Tibetan information processing, while abbreviated word recognition is one of the key and most difficult problems in TWS. Most of the existing methods for Tibetan abbreviated word recognition are rule-based approaches, which need vocabulary support. In this paper, we propose a method based on a sequence tagging model for abbreviated word recognition, and then implement it in TWS systems with sequence labeling models. The experimental results show that our abbreviated word recognition method is fast and effective and can be combined easily with the segmentation model, significantly improving the performance of Tibetan word segmentation.
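
    The abstract above treats abbreviated word recognition and segmentation as sequence labeling over syllables. The sketch below shows only the final, uncontroversial step of such a pipeline: turning syllable-level tags into segmented words. The BMES tag scheme and the hand-supplied tags are assumptions for illustration; in practice the tags would come from a trained sequence labeling model such as a CRF.

```python
# Hedged sketch: convert syllable-level sequence labels (a BMES-style scheme is
# assumed here) into a word segmentation. The tags are supplied by hand purely
# for illustration; a real system would predict them with a trained model.
def tags_to_words(syllables, tags):
    words, current = [], []
    for syl, tag in zip(syllables, tags):
        current.append(syl)
        if tag in ("E", "S"):          # end of a multi-syllable word or a single-syllable word
            words.append("".join(current))
            current = []
    if current:                         # flush a trailing, unterminated word
        words.append("".join(current))
    return words

# Toy example with placeholder syllables (real input would be Tibetan syllables)
syllables = ["syl1", "syl2", "syl3", "syl4"]
tags      = ["B",    "E",    "S",    "S"]
print(tags_to_words(syllables, tags))   # ['syl1syl2', 'syl3', 'syl4']
```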

  15. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    PubMed

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.

  16. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    PubMed

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final position than in the middle position, whereas the Chinese group showed the opposite pattern and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  17. Effects of ocular transverse chromatic aberration on peripheral word identification.

    PubMed

    Yang, Shun-Nan; Tai, Yu-chi; Laukkanen, Hannu; Sheedy, James E

    2011-11-01

    Transverse chromatic aberration (TCA) smears the retinal image of peripheral stimuli. We previously found that TCA significantly reduces the ability to recognize letters presented in the near fovea by degrading image quality and exacerbating the crowding effect from adjacent letters. The present study examined whether TCA has a significant effect on near foveal and peripheral word identification, and whether within-word orthographic facilitation interacts with the TCA effect to affect word identification. Subjects were briefly presented with a 6- to 7-letter word of high or low frequency in each trial. Target words were generated with weak or strong horizontal color fringe to attenuate the TCA in the right periphery and exacerbate it in the left. The center of the target word was 1°, 2°, 4°, or 6° to the left or right of a fixation point. Subjects' eye position was monitored with an eye-tracker to ensure proper fixation before target presentation. They were required to report the identity of the target word as quickly and accurately as possible. Results show a significant effect of color fringe on the latency and accuracy of word recognition, indicating an existing TCA effect. The observed TCA effect was more salient in the right periphery and was more strongly modulated by word frequency there. Individuals' subjective preference for color-fringed text was correlated with the TCA effect in the near periphery. Our results suggest that TCA significantly affects peripheral word identification, especially when the word is located in the right periphery. Contextual facilitation such as word frequency interacts with TCA to influence the accuracy and latency of word recognition. Copyright © 2011 Elsevier Ltd. All rights reserved.

  18. A Limited-Vocabulary, Multi-Speaker Automatic Isolated Word Recognition System.

    ERIC Educational Resources Information Center

    Paul, James E., Jr.

    Techniques for automatic recognition of isolated words are investigated, and a computer simulation of a word recognition system is effected. Considered in detail are data acquisition and digitizing, word detection, amplitude and time normalization, short-time spectral estimation including spectral windowing, spectral envelope approximation,…

  19. Emotion and language: Valence and arousal affect word recognition

    PubMed Central

    Brysbaert, Marc; Warriner, Amy Beth

    2014-01-01

    Emotion influences most aspects of cognition and behavior, but emotional factors are conspicuously absent from current models of word recognition. The influence of emotion on word recognition has mostly been reported in prior studies on the automatic vigilance for negative stimuli, but the precise nature of this relationship is unclear. Various models of automatic vigilance have claimed that the effect of valence on response times is categorical, an inverted-U, or interactive with arousal. The present study used a sample of 12,658 words, and included many lexical and semantic control factors, to determine the precise nature of the effects of arousal and valence on word recognition. Converging empirical patterns observed in word-level and trial-level data from lexical decision and naming indicate that valence and arousal exert independent monotonic effects: Negative words are recognized more slowly than positive words, and arousing words are recognized more slowly than calming words. Valence explained about 2% of the variance in word recognition latencies, whereas the effect of arousal was smaller. Valence and arousal do not interact, but both interact with word frequency, such that valence and arousal exert larger effects among low-frequency words than among high-frequency words. These results necessitate a new model of affective word processing whereby the degree of negativity monotonically and independently predicts the speed of responding. This research also demonstrates that incorporating emotional factors, especially valence, improves the performance of models of word recognition. PMID:24490848

  20. The posterior parietal cortex in recognition memory: a neuropsychological study.

    PubMed

    Haramati, Sharon; Soroker, Nachum; Dudai, Yadin; Levy, Daniel A

    2008-01-01

    Several recent functional neuroimaging studies have reported robust bilateral activation (L>R) in lateral posterior parietal cortex and precuneus during recognition memory retrieval tasks. It has not yet been determined what cognitive processes are represented by those activations. In order to examine whether parietal lobe-based processes are necessary for basic episodic recognition abilities, we tested a group of 17 first-incident CVA patients whose cortical damage included (but was not limited to) extensive unilateral posterior parietal lesions. These patients performed a series of tasks that yielded parietal activations in previous fMRI studies: yes/no recognition judgments on visual words and on colored object pictures and identifiable environmental sounds. We found that patients with left hemisphere lesions were not impaired compared to controls in any of the tasks. Patients with right hemisphere lesions were not significantly impaired in memory for visual words, but were impaired in recognition of object pictures and sounds. Two lesion-behavior analyses, area-based correlations and voxel-based lesion symptom mapping (VLSM), indicate that these impairments resulted from extra-parietal damage, specifically to frontal and lateral temporal areas. These findings suggest that extensive parietal damage does not impair recognition performance. We suggest that parietal activations recorded during recognition memory tasks might reflect peri-retrieval processes, such as the storage of retrieved memoranda in a working memory buffer for further cognitive processing.

  1. Speech Clarity Index (Ψ): A Distance-Based Speech Quality Indicator and Recognition Rate Prediction for Dysarthric Speakers with Cerebral Palsy

    NASA Astrophysics Data System (ADS)

    Kayasith, Prakasith; Theeramunkong, Thanaruk

    It is a tedious and subjective task to measure the severity of dysarthria by manually evaluating a speaker's speech using available standard assessment methods based on human perception. This paper presents an automated approach to assessing the speech quality of a dysarthric speaker with cerebral palsy. With the consideration of two complementary factors, speech consistency and speech distinction, a speech quality indicator called the speech clarity index (Ψ) is proposed as a measure of the speaker's ability to produce a consistent speech signal for a given word and distinct speech signals for different words. As an application, it can be used to assess speech quality and forecast the speech recognition rate for an individual dysarthric speaker before actual exhaustive implementation of an automatic speech recognition system for that speaker. The effectiveness of Ψ as a speech recognition rate predictor is evaluated by rank-order inconsistency, correlation coefficient, and root-mean-square of difference. The evaluations were done by comparing its predicted recognition rates with those predicted by the standard methods, the articulatory and intelligibility tests, based on two recognition systems (HMM and ANN). The results show that Ψ is a promising indicator for predicting the recognition rate of dysarthric speech. All experiments were done on a speech corpus composed of speech data from eight normal speakers and eight dysarthric speakers.
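
    The abstract defines Ψ in terms of two factors, speech consistency and speech distinction, without giving the formula here. The sketch below computes one plausible version of each factor from per-recording feature vectors and combines them by simple averaging; the feature representation, similarity measure, and combination rule are all assumptions for illustration, not the published definition of Ψ.

```python
# Hedged sketch of the two ingredients described in the abstract: within-word
# consistency and between-word distinction of a speaker's recordings. Combining
# the two scores by simple averaging is an assumption for illustration only.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def clarity_index(recordings):
    """recordings: dict word -> list of fixed-length feature vectors (np.ndarray)."""
    within, between = [], []
    words = list(recordings)
    for w in words:                                  # consistency: same word, repeated
        reps = recordings[w]
        within += [cosine(reps[i], reps[j])
                   for i in range(len(reps)) for j in range(i + 1, len(reps))]
    for i in range(len(words)):                      # distinction: different words
        for j in range(i + 1, len(words)):
            between += [cosine(a, b) for a in recordings[words[i]]
                                     for b in recordings[words[j]]]
    consistency = np.mean(within)                    # high if repetitions are similar
    distinction = 1.0 - np.mean(between)             # high if different words differ
    return (consistency + distinction) / 2.0         # illustrative combination

rng = np.random.default_rng(0)
demo = {"wordA": [rng.normal(size=8) for _ in range(3)],
        "wordB": [rng.normal(size=8) for _ in range(3)]}
print(round(clarity_index(demo), 3))
```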

  2. Predictive coding accelerates word recognition and learning in the early stages of language development.

    PubMed

    Ylinen, Sari; Bosseler, Alexis; Junttila, Katja; Huotilainen, Minna

    2017-11-01

    The ability to predict future events in the environment and learn from them is a fundamental component of adaptive behavior across species. Here we propose that inferring predictions facilitates speech processing and word learning in the early stages of language development. Twelve- and 24-month-olds' electrophysiological brain responses to heard syllables are faster and more robust when the preceding word context predicts the ending of a familiar word. For unfamiliar, novel word forms, however, word-expectancy violation generates a prediction error response, the strength of which significantly correlates with children's vocabulary scores at 12 months. These results suggest that predictive coding may accelerate word recognition and support early learning of novel words, including not only the learning of heard word forms but also their mapping to meanings. Prediction error may mediate learning via attention, since infants' attention allocation to the entire learning situation in natural environments could account for the link between prediction error and the understanding of word meanings. On the whole, the present results on predictive coding support the view that principles of brain function reported across domains in humans and non-human animals apply to language and its development in the infant brain. A video abstract of this article can be viewed at: http://hy.fi/unitube/video/e1cbb495-41d8-462e-8660-0864a1abd02c. [Correction added on 27 January 2017, after first online publication: The video abstract link was added.]. © 2016 John Wiley & Sons Ltd.

  3. Clinical implications of word recognition differences in earphone and aided conditions

    PubMed Central

    McRackan, Theodore R.; Ahlstrom, Jayne B.; Clinkscales, William B.; Meyer, Ted A.; Dubno, Judy R

    2017-01-01

    Objective: To compare word recognition scores for adults with hearing loss measured using earphones and in the sound field without and with hearing aids (HA). Study design: Independent review of pre-surgical audiological data from an active middle ear implant (MEI) FDA clinical trial. Setting: Multicenter prospective FDA clinical trial. Patients: Ninety-four adult HA users. Interventions/Main outcomes measured: Pre-operative earphone, unaided, and aided pure tone thresholds, word recognition scores, and speech intelligibility index. Results: We performed an independent review of pre-surgical audiological data from a MEI FDA trial and compared unaided and aided word recognition scores with participants' HAs fit according to the NAL-R algorithm. For 52 participants (55.3%), differences in scores between earphone and aided conditions were >10%; for 33 participants (35.1%), earphone scores were higher by 10% or more than aided scores. These participants had significantly higher pure tone thresholds (at 250 Hz, 500 Hz, and 1000 Hz), higher pure tone averages, higher speech recognition thresholds, and higher earphone speech levels (p=0.002). No significant correlation was observed between word recognition scores measured with earphones and with hearing aids (r=.14; p=0.16), whereas a moderately high positive correlation was observed between unaided and aided word recognition (r=0.68; p<0.001). Conclusion: Results of these analyses do not support the common clinical practice of using word recognition scores measured with earphones to predict aided word recognition or hearing aid benefit. Rather, these results provide evidence supporting the measurement of aided word recognition in patients who are considering hearing aids. PMID:27631832

  4. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  5. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    ERIC Educational Resources Information Center

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  6. Asymmetries in Early Word Recognition: The Case of Stops and Fricatives

    ERIC Educational Resources Information Center

    Altvater-Mackensen, Nicole; van der Feest, Suzanne V. H.; Fikkert, Paula

    2014-01-01

    Toddlers' discrimination of native phonemic contrasts is generally unproblematic. Yet using those native contrasts in word learning and word recognition can be more challenging. In this article, we investigate perceptual versus phonological explanations for asymmetrical patterns found in early word recognition. We systematically investigated the…

  7. Memory Performance in Adults with Down Syndrome.

    ERIC Educational Resources Information Center

    Simon, Elliott W.; And Others

    1995-01-01

    The memory abilities of adults (N=20) with Down Syndrome (DS) were compared to subjects matched on age and IQ and on age alone. Three memory tasks were employed: facial recognition, free recall of pictures and words, and cued recall of separate or interacting pictures. In DS individuals, memory was improved primarily by practice and interactive…

  8. Story Comprehension as a Function of Modality and Reading Ability.

    ERIC Educational Resources Information Center

    Marlowe, Wendy; And Others

    1979-01-01

    In a study, 12 normal children and 12 reading-disabled children (with word recognition difficulties; mean age 9.2 years) were compared on reading and listening comprehension to test whether disabled readers, given an auditory presentation, would show comprehension of material comparable to that of normal readers given a visual presentation. (PHR)

  9. Comparison of Word Recognition Strategies in EFL Adult Learners: Orthography vs. Phonology

    ERIC Educational Resources Information Center

    Sieh, Yu-cheng

    2016-01-01

    In an attempt to compare how orthography and phonology interact in EFL learners with different reading abilities, online measures were administered in this study to two groups of university learners, indexed by their reading scores on the Test of English for International Communication (TOEIC). In terms of "accuracy," the less-skilled…

  10. Using Serial and Discrete Digit Naming to Unravel Word Reading Processes

    PubMed Central

    Altani, Angeliki; Protopapas, Athanassios; Georgiou, George K.

    2018-01-01

    During reading acquisition, word recognition is assumed to undergo a developmental shift from slow serial/sublexical processing of letter strings to fast parallel processing of whole word forms. This shift has been proposed to be detected by examining the size of the relationship between serial- and discrete-trial versions of word reading and rapid naming tasks. Specifically, a strong association between serial naming of symbols and single word reading suggests that words are processed serially, whereas a strong association between discrete naming of symbols and single word reading suggests that words are processed in parallel as wholes. In this study, 429 Grade 1, 3, and 5 English-speaking Canadian children were tested on serial and discrete digit naming and word reading. Across grades, single word reading was more strongly associated with discrete naming than with serial naming of digits, indicating that short high-frequency words are processed as whole units early in the development of reading ability in English. In contrast, serial naming was not a unique predictor of single word reading across grades, suggesting that within-word sequential processing was not required for the successful recognition for this set of words. Factor mixture analysis revealed that our participants could be clustered into two classes, namely beginning and more advanced readers. Serial naming uniquely predicted single word reading only among the first class of readers, indicating that novice readers rely on a serial strategy to decode words. Yet, a considerable proportion of Grade 1 students were assigned to the second class, evidently being able to process short high-frequency words as unitized symbols. We consider these findings together with those from previous studies to challenge the hypothesis of a binary distinction between serial/sublexical and parallel/lexical processing in word reading. We argue instead that sequential processing in word reading operates on a continuum, depending on the level of reading proficiency, the degree of orthographic transparency, and word-specific characteristics. PMID:29706918

  11. Using Serial and Discrete Digit Naming to Unravel Word Reading Processes.

    PubMed

    Altani, Angeliki; Protopapas, Athanassios; Georgiou, George K

    2018-01-01

    During reading acquisition, word recognition is assumed to undergo a developmental shift from slow serial/sublexical processing of letter strings to fast parallel processing of whole word forms. This shift has been proposed to be detected by examining the size of the relationship between serial- and discrete-trial versions of word reading and rapid naming tasks. Specifically, a strong association between serial naming of symbols and single word reading suggests that words are processed serially, whereas a strong association between discrete naming of symbols and single word reading suggests that words are processed in parallel as wholes. In this study, 429 Grade 1, 3, and 5 English-speaking Canadian children were tested on serial and discrete digit naming and word reading. Across grades, single word reading was more strongly associated with discrete naming than with serial naming of digits, indicating that short high-frequency words are processed as whole units early in the development of reading ability in English. In contrast, serial naming was not a unique predictor of single word reading across grades, suggesting that within-word sequential processing was not required for the successful recognition for this set of words. Factor mixture analysis revealed that our participants could be clustered into two classes, namely beginning and more advanced readers. Serial naming uniquely predicted single word reading only among the first class of readers, indicating that novice readers rely on a serial strategy to decode words. Yet, a considerable proportion of Grade 1 students were assigned to the second class, evidently being able to process short high-frequency words as unitized symbols. We consider these findings together with those from previous studies to challenge the hypothesis of a binary distinction between serial/sublexical and parallel/lexical processing in word reading. We argue instead that sequential processing in word reading operates on a continuum, depending on the level of reading proficiency, the degree of orthographic transparency, and word-specific characteristics.

  12. Anticipatory coarticulation facilitates word recognition in toddlers.

    PubMed

    Mahr, Tristan; McMillan, Brianna T M; Saffran, Jenny R; Ellis Weismer, Susan; Edwards, Jan

    2015-09-01

    Children learn from their environments and their caregivers. To capitalize on learning opportunities, young children have to recognize familiar words efficiently by integrating contextual cues across word boundaries. Previous research has shown that adults can use phonetic cues from anticipatory coarticulation during word recognition. We asked whether 18- to 24-month-olds (n=29) used coarticulatory cues on the word "the" when recognizing the following noun. We performed a looking-while-listening eye-tracking experiment to examine word recognition in neutral vs. facilitating coarticulatory conditions. Participants looked to the target image significantly sooner when the determiner contained facilitating coarticulatory cues. These results provide the first evidence that novice word-learners can take advantage of anticipatory sub-phonemic cues during word recognition. Copyright © 2015 Elsevier B.V. All rights reserved.

  13. False recognition production indexes in forward associative strength (FAS) lists with three critical words.

    PubMed

    Beato, María Soledad; Arndt, Jason

    2014-01-01

    False memory illusions have been widely studied using the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words semantically related to a single nonpresented critical word. In a memory test, critical words are often falsely recalled and recognized. The present study was conducted to measure the levels of false recognition for seventy-five Spanish DRM word lists that have multiple critical words per list. Lists included three critical words (e.g., HELL, LUCIFER, and SATAN) simultaneously associated with six studied words (e.g., devil, demon, fire, red, bad, and evil). Different levels of forward associative strength (FAS) between the critical words and their studied associates were used in the construction of the lists. Specifically, we selected lists with the highest possible FAS values, and FAS was continuously decreased in order to obtain the 75 lists. Six words per list, simultaneously associated with three critical words, were sufficient to produce false recognition. Furthermore, there was wide variability in rates of false recognition (e.g., 53% for DUNGEON, PRISON, and GRATES; 1% for BRACKETS, GARMENT, and CLOTHING). Finally, there was no correlation between false recognition and associative strength. False recognition variability could not be attributed to differences in forward associative strength.

  14. Novel grid-based optical Braille conversion: from scanning to wording

    NASA Astrophysics Data System (ADS)

    Yoosefi Babadi, Majid; Jafari, Shahram

    2011-12-01

    Grid-based optical Braille conversion (GOBCO) is explained in this article. The grid-fitting technique involves processing scanned images taken from old hard-copy Braille manuscripts, recognising and converting them into English ASCII text documents inside a computer. The resulting words are verified against the relevant dictionary to produce the final output. The algorithms employed in this article can be easily modified for implementation in other visual pattern recognition systems and text extraction applications. This technique has several advantages, including simplicity of the algorithm, high speed of execution, the ability to help visually impaired and blind people work with fax machines and the like, and the ability to help sighted people with no prior knowledge of Braille understand hard-copy Braille manuscripts.
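
    As a hedged illustration of the grid-fitting idea (not the published GOBCO algorithm), the sketch below snaps detected dot centroids in a single Braille cell to a 2x3 grid and decodes the resulting dot pattern through a partial lookup table; the cell geometry and the table entries are illustrative assumptions.

```python
# Hedged sketch of grid fitting: dot centroids detected in a scanned Braille
# cell are snapped to a 2x3 grid of dot positions and the resulting dot set is
# decoded through a (partial) lookup table. Cell geometry and thresholds are
# illustrative, not the published GOBCO parameters.
# Standard Braille dot numbering: 1-3 down the left column, 4-6 down the right.
BRAILLE = {frozenset({1}): "a", frozenset({1, 2}): "b", frozenset({1, 4}): "c",
           frozenset({1, 4, 5}): "d", frozenset({1, 5}): "e",
           frozenset({1, 3}): "k", frozenset({1, 2, 3}): "l"}

def decode_cell(dot_xy, cell_w=10.0, cell_h=15.0):
    """dot_xy: (x, y) centroids in cell-local coordinates with (0, 0) at top-left."""
    dots = set()
    for x, y in dot_xy:
        col = 0 if x < cell_w / 2 else 1          # left or right column
        row = min(2, int(3 * y / cell_h))         # one of three rows
        dots.add(row + 1 + 3 * col)               # dot number 1..6
    return BRAILLE.get(frozenset(dots), "?")      # '?' for patterns not in the table

# A cell with dots in the left column, rows 1 and 2 -> dot set {1, 2} -> 'b'
print(decode_cell([(2.0, 1.5), (2.5, 6.0)]))
```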

  15. Novel Blind Recognition Algorithm of Frame Synchronization Words Based on Soft-Decision in Digital Communication Systems.

    PubMed

    Qin, Jiangyi; Huang, Zhiping; Liu, Chunwu; Su, Shaojing; Zhou, Jing

    2015-01-01

    A novel blind recognition algorithm for frame synchronization words is proposed to recognize frame synchronization word parameters in digital communication systems. In this paper, a blind recognition method for frame synchronization words based on hard decisions is derived in detail, and the criteria for parameter recognition are given. Compared with blind recognition based on hard decisions, utilizing soft decisions can improve the accuracy of blind recognition. Therefore, drawing on the characteristics of the Quadrature Phase Shift Keying (QPSK) signal, an improved blind recognition algorithm based on soft decisions is proposed; the improved algorithm can also be extended to other signal modulation forms. The complete blind recognition steps of the hard-decision algorithm and the soft-decision algorithm are then given in detail. Finally, the simulation results show that both the hard-decision algorithm and the soft-decision algorithm can blindly recognize the parameters of frame synchronization words, and the improved algorithm clearly enhances the accuracy of blind recognition.
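
    As a hedged illustration of the hard-decision variant described above (not the paper's exact algorithm), the sketch below stacks a bit stream into frames of a candidate length and flags bit positions that are nearly constant across frames as a candidate synchronization word; the threshold and toy stream are assumptions. The soft-decision improvement would operate on demodulator soft values instead of hard bits.

```python
# Hedged sketch of the hard-decision idea: for a candidate frame length, stack
# the bit stream into frames and look for positions whose bit value is (nearly)
# constant across frames; those positions and their majority values form a
# candidate synchronization word.
import numpy as np

def find_sync_positions(bits, frame_len, agree_thresh=0.95):
    n_frames = len(bits) // frame_len
    frames = np.asarray(bits[: n_frames * frame_len]).reshape(n_frames, frame_len)
    ones = frames.mean(axis=0)                       # fraction of 1s per bit position
    agreement = np.maximum(ones, 1.0 - ones)         # consistency across frames
    positions = np.where(agreement >= agree_thresh)[0]
    return positions, (ones[positions] > 0.5).astype(int)

# Toy stream: frame length 20, sync word 1,0,1,1,0,1 at offset 0, random payload
rng = np.random.default_rng(1)
sync = [1, 0, 1, 1, 0, 1]
stream = []
for _ in range(50):
    stream += sync + list(rng.integers(0, 2, size=14))
pos, word = find_sync_positions(stream, frame_len=20)
print(pos, word)     # expected: positions 0..5 and the sync word bits
```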

  16. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  17. The Impact of Left and Right Intracranial Tumors on Picture and Word Recognition Memory

    ERIC Educational Resources Information Center

    Goldstein, Bram; Armstrong, Carol L.; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V.

    2004-01-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH…

  18. Word segmentation in phonemically identical and prosodically different sequences using cochlear implants: A case study.

    PubMed

    Basirat, Anahita

    2017-01-01

    Cochlear implant (CI) users frequently achieve good speech understanding based on phoneme and word recognition. However, there is a significant variability between CI users in processing prosody. The aim of this study was to examine the abilities of an excellent CI user to segment continuous speech using intonational cues. A post-lingually deafened adult CI user and 22 normal hearing (NH) subjects segmented phonemically identical and prosodically different sequences in French such as 'l'affiche' (the poster) versus 'la fiche' (the sheet), both [lafiʃ]. All participants also completed a minimal pair discrimination task. Stimuli were presented in auditory-only and audiovisual presentation modalities. The performance of the CI user in the minimal pair discrimination task was 97% in the auditory-only and 100% in the audiovisual condition. In the segmentation task, contrary to the NH participants, the performance of the CI user did not differ from the chance level. Visual speech did not improve word segmentation. This result suggests that word segmentation based on intonational cues is challenging when using CIs even when phoneme/word recognition is very well rehabilitated. This finding points to the importance of the assessment of CI users' skills in prosody processing and the need for specific interventions focusing on this aspect of speech communication.

  19. Linguistic Context Versus Semantic Competition in Word Recognition by Younger and Older Adults With Cochlear Implants.

    PubMed

    Amichetti, Nicole M; Atagi, Eriko; Kong, Ying-Yee; Wingfield, Arthur

    The increasing number of older adults now receiving cochlear implants raises the question of how the novel signal produced by cochlear implants may interact with cognitive aging in the recognition of words heard spoken within a linguistic context. The objective of this study was to pit the facilitative effects of a constraining linguistic context against a potential age-sensitive negative effect of response competition on the effectiveness of word recognition. Younger (n = 8; mean age = 22.5 years) and older (n = 8; mean age = 67.5 years) adult implant recipients heard 20 target words as the final words in sentences that manipulated the target word's probability of occurrence within the sentence context. Data from published norms were also used to measure response entropy, calculated from the total number of different responses and the probability distribution of the responses suggested by the sentence context. Sentence-final words were presented to participants using a word-onset gating paradigm, in which a target word was presented with increasing amounts of its onset duration in 50 msec increments until the word was correctly identified. Results showed that for both younger and older adult implant users, the amount of word-onset information needed for correct recognition of sentence-final words was inversely proportional to their likelihood of occurrence within the sentence context, with older adults gaining differential advantage from the contextual constraints offered by a sentence context. On the negative side, older adults' word recognition was differentially hampered by high response entropy, with this effect being driven primarily by the number of competing responses that might also fit the sentence context. Consistent with previous research with normal-hearing younger and older adults, the present results showed older adult implant users' recognition of spoken words to be highly sensitive to linguistic context. This sensitivity, however, also resulted in a greater degree of interference from other words that might also be activated by the context, with negative effects on ease of word recognition. These results are consistent with an age-related inhibition deficit extending to the domain of semantic constraints on word recognition.

  20. The role of backward associative strength in false recognition of DRM lists with multiple critical words.

    PubMed

    Beato, María S; Arndt, Jason

    2017-08-01

    Memory is a reconstruction of the past and is prone to errors. One of the most widely used paradigms to examine false memory is the Deese/Roediger-McDermott (DRM) paradigm. In this paradigm, participants study words associatively related to a non-presented critical word. In a subsequent memory test, critical words are often falsely recalled and/or recognized. In the present study, we examined the influence of backward associative strength (BAS) on false recognition using DRM lists with multiple critical words. In forty-eight English DRM lists, we manipulated BAS while controlling forward associative strength (FAS). Lists included four words (e.g., prison, convict, suspect, fugitive) simultaneously associated with two critical words (e.g., CRIMINAL, JAIL). The results indicated that true recognition was similar in high-BAS and low-BAS lists, while false recognition was greater in high-BAS lists than in low-BAS lists. Furthermore, there was a positive correlation between false recognition and the probability of a resonant connection between the studied words and their associates. These findings suggest that BAS and resonant connections influence false recognition, and extend prior research using DRM lists associated with a single critical word to studies of DRM lists associated with multiple critical words.

  1. Not all reading disabilities are dyslexia: distinct neurobiology of specific comprehension deficits.

    PubMed

    Cutting, Laurie E; Clements-Stephens, Amy; Pugh, Kenneth R; Burns, Scott; Cao, Aize; Pekar, James J; Davis, Nicole; Rimrodt, Sheryl L

    2013-01-01

    Although an extensive literature exists on the neurobiological correlates of dyslexia (DYS), to date, no studies have examined the neurobiological profile of those who exhibit poor reading comprehension despite intact word-level abilities (specific reading comprehension deficits [S-RCD]). Here we investigated the word-level abilities of S-RCD as compared to typically developing readers (TD) and those with DYS by examining the blood oxygenation-level dependent response to words varying in frequency. Understanding whether S-RCD process words in the same manner as TD, or show alternate pathways to achieve normal word-reading abilities, may provide insights into the origin of this disorder. Results showed that as compared to TD, DYS showed abnormal covariance during word processing with right-hemisphere homologs of the left-hemisphere reading network in conjunction with left occipitotemporal underactivation. In contrast, S-RCD showed an intact neurobiological response to word stimuli in occipitotemporal regions (associated with fast and efficient word processing); however, inferior frontal gyrus (IFG) abnormalities were observed. Specifically, TD showed a higher percent signal change within right IFG for low- versus high-frequency words as compared to both S-RCD and DYS. Using psychophysiological interaction analyses, a coupling-by-reading-group interaction was found in right IFG for DYS, as indicated by a widespread greater covariance between right IFG and right occipitotemporal cortex/visual word-form areas, as well as bilateral medial frontal gyrus, as compared to TD. For S-RCD, the context-dependent functional interaction anomaly was most prominently seen in left IFG, which covaried to a greater extent with hippocampal, parahippocampal, and prefrontal areas than for TD for low- as compared to high-frequency words. Given the greater lexical access demands of low-frequency as compared to high-frequency words, these results may suggest specific weaknesses in accessing lexical-semantic representations during word recognition. These novel findings provide foundational insights into the nature of S-RCD, and set the stage for future investigations of this common, but understudied, reading disorder.

  2. Do handwritten words magnify lexical effects in visual word recognition?

    PubMed

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  3. Deaf Children With Cochlear Implants Do Not Appear to Use Sentence Context to Help Recognize Spoken Words

    PubMed Central

    Conway, Christopher M.; Deocampo, Joanne A.; Walk, Anne M.; Anaya, Esperanza M.; Pisoni, David B.

    2015-01-01

    Purpose The authors investigated the ability of deaf children with cochlear implants (CIs) to use sentence context to facilitate the perception of spoken words. Method Deaf children with CIs (n = 24) and an age-matched group of children with normal hearing (n = 31) were presented with lexically controlled sentences and were asked to repeat each sentence in its entirety. Performance was analyzed at each of 3 word positions of each sentence (first, second, and third key word). Results Whereas the children with normal hearing showed robust effects of contextual facilitation—improved speech perception for the final words in a sentence—the deaf children with CIs on average showed no such facilitation. Regression analyses indicated that for the deaf children with CIs, Forward Digit Span scores significantly predicted accuracy scores for all 3 positions, whereas performance on the Stroop Color and Word Test, Children’s Version (Golden, Freshwater, & Golden, 2003) predicted how much contextual facilitation was observed at the final word. Conclusions The pattern of results suggests that some deaf children with CIs do not use sentence context to improve spoken word recognition. The inability to use sentence context may be due to possible interactions between language experience and cognitive factors that affect the ability to successfully integrate temporal–sequential information in spoken language. PMID:25029170

  4. Word recognition using a lexicon constrained by first/last character decisions

    NASA Astrophysics Data System (ADS)

    Zhao, Sheila X.; Srihari, Sargur N.

    1995-03-01

    In lexicon-based recognition of machine-printed word images, the size of the lexicon can be quite extensive. Recognition performance is closely related to the size of the lexicon and drops quickly as lexicon size increases. Here, we present an algorithm to improve word recognition performance by reducing the size of the given lexicon. The algorithm utilizes the information provided by the first and last characters of a word to reduce the size of the given lexicon. Given a word image and a lexicon that contains the word in the image, the first and last characters are segmented and then recognized by a character classifier. The possible candidates based on the results given by the classifier are selected, which gives us the sub-lexicon. A word shape analysis algorithm is then applied to produce the final ranking of the given lexicon. The algorithm was tested on a set of machine-printed gray-scale word images that includes a wide range of print types and qualities.
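
    The core lexicon-reduction step described above is simple to state: keep only lexicon entries whose first and last characters are among the character classifier's top candidates. The sketch below shows that filtering step with made-up candidate sets; the classifier itself and the subsequent word shape analysis are omitted.

```python
# Hedged sketch of the lexicon-reduction step: keep only lexicon entries whose
# first and last characters appear among the classifier's top candidates for
# the segmented first/last character images. Candidate sets are illustrative.
def reduce_lexicon(lexicon, first_candidates, last_candidates):
    return [w for w in lexicon
            if w and w[0] in first_candidates and w[-1] in last_candidates]

lexicon = ["recognition", "reduction", "retrieval", "precision", "region"]
# Suppose the character classifier's top candidates for this word image are:
sub_lexicon = reduce_lexicon(lexicon, first_candidates={"r"}, last_candidates={"n", "m"})
print(sub_lexicon)   # ['recognition', 'reduction', 'region']
```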

  5. Word Spotting and Recognition with Embedded Attributes.

    PubMed

    Almazán, Jon; Gordo, Albert; Fornés, Alicia; Valveny, Ernest

    2014-12-01

    This paper addresses the problems of word spotting and word recognition on images. In word spotting, the goal is to find all instances of a query word in a dataset of images. In recognition, the goal is to recognize the content of the word image, usually aided by a dictionary or lexicon. We describe an approach in which both word images and text strings are embedded in a common vectorial subspace. This is achieved by a combination of label embedding and attributes learning, and a common subspace regression. In this subspace, images and strings that represent the same word are close together, allowing one to cast recognition and retrieval tasks as a nearest neighbor problem. Contrary to most other existing methods, our representation has a fixed length, is low dimensional, and is very fast to compute and, especially, to compare. We test our approach on four public datasets of both handwritten documents and natural images showing results comparable or better than the state-of-the-art on spotting and recognition tasks.
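
    Once word images and text strings live in a common embedding space, spotting and recognition reduce to nearest-neighbor search, as the abstract notes. The sketch below shows that retrieval step with cosine similarity over random placeholder vectors; the actual embedding (attribute learning plus subspace regression) is not reproduced here.

```python
# Hedged sketch of the retrieval step once images and strings share a common
# embedding space: both are fixed-length vectors, and spotting/recognition
# become nearest-neighbor search (cosine similarity here). The vectors below
# are random placeholders, not learned embeddings.
import numpy as np

def nearest(query_vec, gallery_vecs, gallery_labels):
    q = query_vec / np.linalg.norm(query_vec)
    g = gallery_vecs / np.linalg.norm(gallery_vecs, axis=1, keepdims=True)
    sims = g @ q                                   # cosine similarity to each gallery item
    order = np.argsort(-sims)                      # most similar first
    return [(gallery_labels[i], float(sims[i])) for i in order]

rng = np.random.default_rng(2)
image_embeddings = rng.normal(size=(4, 16))        # stand-ins for embedded word images
lexicon = ["word", "world", "sword", "ward"]        # labels for the gallery entries
query = image_embeddings[2] + 0.1 * rng.normal(size=16)   # a noisy query embedding
print(nearest(query, image_embeddings, lexicon)[0])        # closest gallery entry
```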

  6. Large-Corpus Phoneme and Word Recognition and the Generality of Lexical Context in CVC Word Perception

    ERIC Educational Resources Information Center

    Gelfand, Jessica T.; Christie, Robert E.; Gelfand, Stanley A.

    2014-01-01

    Purpose: Speech recognition may be analyzed in terms of recognition probabilities for perceptual wholes (e.g., words) and parts (e.g., phonemes), where j or the j-factor reveals the number of independent perceptual units required for recognition of the whole (Boothroyd, 1968b; Boothroyd & Nittrouer, 1988; Nittrouer & Boothroyd, 1990). For…
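
    For reference, the j-factor relation alluded to above is usually stated as follows (a standard formulation attributed to Boothroyd & Nittrouer, 1988; shown here as a hedged summary rather than a quotation from the article):

```latex
% Hedged summary of the j-factor relation.
% p_w: probability of recognizing the whole (e.g., a word);
% p_p: probability of recognizing a part (e.g., a phoneme).
\[
  p_w = p_p^{\,j}
  \qquad\Longrightarrow\qquad
  j = \frac{\log p_w}{\log p_p}
\]
% Example: if p_p = 0.9 and p_w = 0.729, then j = \log 0.729 / \log 0.9 = 3,
% i.e., the word behaves as three independently perceived units.
```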

  7. [The Amsterdam Dementia Screening Test in cognitively healthy and clinical samples. An update of normative data].

    PubMed

    van Toutert, Meta; Diesfeldt, Han; Hoek, Dirk

    2016-10-01

    The six tests in the Amsterdam Dementia Screening Test (ADST) examine the cognitive domains of episodic memory (delayed picture recognition, word learning), orientation, category fluency (animals and occupations), constructional ability (figure copying) and executive function (alternating sequences). New normative data were collected in a sample of 102 elderly volunteers (aged 65-94), including subjects with medical or other health conditions, except dementia or frank cognitive impairment (MMSE > 24). Included subjects were independent in complex instrumental activities of daily living. Fluency, but not the other tests, needed adjustment for age and education. A deficit score (0-1) was computed for each test. Summation (range 0-6) proved useful in differentiating patients with dementia (N = 741) from normal elderly (N = 102). Positive and negative predictive power across a range of summed deficit scores and base rates are displayed in Bayesian probability tables. In the normal elderly, delayed recall of eight words was tested and adjusted for initial recall. A recognition test mixed the target words with eight distractors. Delayed recognition was adjusted for immediate and delayed recall. The ADST and the normative data in this paper help the clinical neuropsychologist make decisions concerning the presence or absence of neurocognitive disorder in individual elderly examinees.
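
    The Bayesian probability tables mentioned above follow from Bayes' theorem applied to a test's sensitivity, specificity, and the base rate of the condition. The sketch below computes positive and negative predictive values that way; the numeric inputs are placeholders, not the ADST norms.

```python
# Hedged sketch: positive/negative predictive value from sensitivity,
# specificity, and base rate via Bayes' theorem. Inputs are placeholders.
def predictive_values(sensitivity, specificity, base_rate):
    ppv = (sensitivity * base_rate) / (
        sensitivity * base_rate + (1 - specificity) * (1 - base_rate))
    npv = (specificity * (1 - base_rate)) / (
        specificity * (1 - base_rate) + (1 - sensitivity) * base_rate)
    return ppv, npv

for base_rate in (0.1, 0.3, 0.5):
    ppv, npv = predictive_values(sensitivity=0.85, specificity=0.90, base_rate=base_rate)
    print(f"base rate {base_rate:.1f}: PPV {ppv:.2f}, NPV {npv:.2f}")
```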

  8. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  9. The effect of warnings on false memories in young and older adults.

    PubMed

    McCabe, David P; Smith, Anderson D

    2002-10-01

    In the present experiments, we examined adult age differences in the ability to suppress false memories, using the Deese-Roediger-McDermott (DRM) paradigm (Deese, 1959; Roediger & McDermott, 1995). Participants studied lists of words (e.g., bed, rest, awake, etc.), each related to a nonpresented critical lure word (e.g., sleep). Typically, recognition tests reveal false alarms to critical lures at rates comparable to those for hits for studied words. In two experiments, separate groups of young and older adults were unwarned about the false memory effect, warned before studying the lists, or warned after study and before test. Lists were presented at either a slow rate (4 sec/word) or a faster rate (2 sec/word). Young adults were better able to discriminate between studied words and critical lures when warned about the DRM effect either before study or after study but before retrieval, and their performance improved with a slower presentation rate. Older adults were able to discriminate between studied words and critical lures when given warnings before study, but not when given warnings after study but before retrieval. Performance on a working memory capacity measure predicted false recognition following study and retrieval warnings. The results suggest that effective use of warnings to reduce false memories is contingent on the quality and type of encoded information, as well as on whether that information is accessed at retrieval. Furthermore, discriminating between similar sources of activation is dependent on working memory capacity, which declines with advancing age.

  10. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    PubMed

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  11. A pilot study to assess oral health literacy by comparing a word recognition and comprehension tool.

    PubMed

    Khan, Khadija; Ruby, Brendan; Goldblatt, Ruth S; Schensul, Jean J; Reisine, Susan

    2014-11-18

    Oral health literacy is important to oral health outcomes, yet very little has been established on comparing word recognition to comprehension in oral health literacy, especially in older adults. Our goal was to compare methods of measuring oral health literacy in older adults by using the Rapid Estimate of Literacy in Dentistry (REALD-30) tool, including word recognition and comprehension, and by assessing comprehension of a brochure about dry mouth. Seventy-five males and 75 females were recruited from the University of Connecticut dental practice. Participants were English speakers and at least 50 years of age. They were asked to read the REALD-30 words out loud (word recognition) and then define them (comprehension). Each correctly pronounced and defined word was scored 1, for total REALD-30 word recognition and REALD-30 comprehension scores of 0-30. Participants then read the National Institute of Dental and Craniofacial Research brochure "Dry Mouth" and answered three questions defining dry mouth, its causes, and its treatment. Participants also completed a survey on dental behavior. Participants scored higher on REALD-30 word recognition, with a mean of 22.98 (SD = 5.1), than on REALD-30 comprehension, with a mean of 16.1 (SD = 4.3). The mean score on brochure comprehension was 5.1 out of a possible 7 (SD = 1.6). Pearson correlations demonstrated significant associations among the three measures. Multivariate regression showed that females and those with higher education had significantly higher scores on REALD-30 word recognition and the dry mouth brochure questions. Being white was significantly related to higher REALD-30 recognition and comprehension scores but not to scores on the brochure. This pilot study demonstrates the feasibility of using the REALD-30 and a brochure to assess literacy among older adults in a university setting. Participants had higher scores on word recognition than on comprehension, agreeing with other studies that recognition does not imply understanding.

  12. The cingulo-opercular network provides word-recognition benefit.

    PubMed

    Vaden, Kenneth I; Kuchinsky, Stefanie E; Cute, Stephanie L; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2013-11-27

    Recognizing speech in difficult listening conditions requires considerable focus of attention that is often demonstrated by elevated activity in putative attention systems, including the cingulo-opercular network. We tested the prediction that elevated cingulo-opercular activity provides word-recognition benefit on a subsequent trial. Eighteen healthy, normal-hearing adults (10 females; aged 20-38 years) performed word recognition (120 trials) in multi-talker babble at +3 and +10 dB signal-to-noise ratios during a sparse sampling functional magnetic resonance imaging (fMRI) experiment. Blood oxygen level-dependent (BOLD) contrast was elevated in the anterior cingulate cortex, anterior insula, and frontal operculum in response to poorer speech intelligibility and response errors. These brain regions exhibited significantly greater correlated activity during word recognition compared with rest, supporting the premise that word-recognition demands increased the coherence of cingulo-opercular network activity. Consistent with an adaptive control network explanation, general linear mixed model analyses demonstrated that increased magnitude and extent of cingulo-opercular network activity was significantly associated with correct word recognition on subsequent trials. These results indicate that elevated cingulo-opercular network activity is not simply a reflection of poor performance or error but also supports word recognition in difficult listening conditions.

  13. Storage and retrieval properties of dual codes for pictures and words in recognition memory.

    PubMed

    Snodgrass, J G; McClure, P

    1975-09-01

    Storage and retrieval properties of pictures and words were studied within a recognition memory paradigm. Storage was manipulated by instructing subjects either to image or to verbalize to both picture and word stimuli during the study sequence. Retrieval was manipulated by re-presenting a proportion of the old picture and word items in their opposite form during the recognition test (i.e., some old pictures were tested with their corresponding words and vice versa). Recognition performance for pictures was identical under the two instructional conditions, whereas recognition performance for words was markedly superior under the imagery instruction condition. It was suggested that subjects may engage in dual coding of simple pictures naturally, regardless of instructions, whereas dual coding of words may occur only under imagery instructions. The form of the test item had no effect on recognition performance for either type of stimulus and under either instructional condition. However, change of form of the test item markedly reduced item-by-item correlations between the two instructional conditions. It is tentatively proposed that retrieval is required in recognition, but that the effect of a form change is simply to make the retrieval process less consistent, not less efficient.

  14. The Impact of Strong Assimilation on the Perception of Connected Speech

    ERIC Educational Resources Information Center

    Gaskell, M. Gareth; Snoeren, Natalie D.

    2008-01-01

    Models of compensation for phonological variation in spoken word recognition differ in their ability to accommodate complete assimilatory alternations (such as run assimilating fully to rum in the context of a quick run picks you up). Two experiments addressed whether such complete changes can be observed in casual speech, and if so, whether they…

  15. Double Dissociations in Reading Comprehension Difficulties among Chinese-English Bilinguals and Their Association with Tone Awareness

    ERIC Educational Resources Information Center

    Choi, William; Tong, Xiuli; Deacon, S. Hélène

    2017-01-01

    Poor comprehenders have reading comprehension difficulties but normal word recognition ability. Here, we report the first study, which investigated (i) the dissociation and (ii) the prevalence of L1-L2 reading comprehension difficulties, and (iii) the levels of key metalinguistic skills in poor comprehenders among Chinese-English bilingual…

  16. [Representation of letter position in visual word recognition process].

    PubMed

    Makioka, S

    1994-08-01

    Two experiments investigated the representation of letter position in the visual word recognition process. In Experiment 1, subjects (12 undergraduates and graduates) were asked to detect a target word in a briefly-presented probe. Probes consisted of two kanji words. The letters which formed targets (critical letters) were always contained in the probes. (e.g. target: [symbol: see text] probe: [symbol: see text]) A high false alarm rate was observed when critical letters occupied the same within-word relative position (left or right within the word) in the probe words as in the target word. In Experiment 2 (subjects were ten undergraduates and graduates), spaces adjacent to probe words were replaced by randomly chosen hiragana letters (e.g. [symbol: see text]), because spaces are not used to separate words in regular Japanese sentences. In addition to the effect of within-word relative position as in Experiment 1, the effect of between-word relative position (left or right across the probe words) was observed. These results suggest that information about the within-word relative position of a letter is used in the word recognition process. The effect of within-word relative position was explained by a connectionist model of word recognition.

  17. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  18. A nonmusician with severe Alzheimer's dementia learns a new song.

    PubMed

    Baird, Amee; Umbach, Heidi; Thompson, William Forde

    2017-02-01

    The hallmark symptom of Alzheimer's Dementia (AD) is impaired memory, but memory for familiar music can be preserved. We explored whether a non-musician with severe AD could learn a new song. A 91-year-old woman (NC) with severe AD was taught an unfamiliar song. We assessed her delayed song recall (24 hours and 2 weeks), music cognition, two-word recall (presented within a familiar song lyric, a famous proverb, or as a word stem completion task), and lyrics and proverb completion. NC's music cognition (pitch and rhythm perception, recognition of familiar music, completion of lyrics) was relatively preserved. She recalled 0/2 words presented in song lyrics or proverbs, but 2/2 word stems, suggesting intact implicit memory function. She could sing along to the newly learnt song on immediate and delayed recall (24 hours and 2 weeks later), and with intermittent prompting could sing it alone. This is the first detailed study of preserved ability to learn a new song in a non-musician with severe AD, and contributes to observations of relatively preserved musical abilities in people with dementia.

  19. Phonological Priming and Cohort Effects in Toddlers

    ERIC Educational Resources Information Center

    Mani, Nivedita; Plunkett, Kim

    2011-01-01

    Adult word recognition is influenced by prior exposure to phonologically or semantically related words ("cup" primes "cat" or "plate") compared to unrelated words ("door"), suggesting that words are organised in the adult lexicon based on their phonological and semantic properties and that word recognition implicates not just the heard word, but…

  20. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2 and 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
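
    The survival technique referenced above can be illustrated with a simplified divergence-point estimate, assuming NumPy; the RT distributions, bin width, bootstrap scheme, and run criterion below are illustrative choices and differ in detail from Reingold and Sheridan's (2014) published procedure.

      import numpy as np

      def survival(rts, t_grid):
          # Survival curve: proportion of trials whose RT exceeds each time point.
          return np.array([(rts > t).mean() for t in t_grid])

      def divergence_point(rt_fast, rt_slow, t_grid, n_boot=1000, alpha=0.05, run=10):
          # Simplified estimate: earliest bin from which the bootstrap CI of the
          # survival-curve difference excludes zero for a run of consecutive bins.
          rng = np.random.default_rng(0)
          diffs = np.empty((n_boot, t_grid.size))
          for b in range(n_boot):
              f = rng.choice(rt_fast, rt_fast.size, replace=True)
              s = rng.choice(rt_slow, rt_slow.size, replace=True)
              diffs[b] = survival(s, t_grid) - survival(f, t_grid)
          lower = np.percentile(diffs, 100 * alpha / 2, axis=0)
          sig = lower > 0
          for i in range(t_grid.size - run + 1):
              if sig[i:i + run].all():
                  return t_grid[i]
          return None

      # Hypothetical lexical decision RTs (ms) for high- vs low-surface-frequency words.
      rng = np.random.default_rng(1)
      rt_high_freq = rng.gamma(shape=8, scale=60, size=400) + 250
      rt_low_freq = rng.gamma(shape=8, scale=60, size=400) + 290
      t_grid = np.arange(200, 1500, 10)
      print("estimated divergence point (ms):", divergence_point(rt_high_freq, rt_low_freq, t_grid))

    The returned value marks the earliest point at which the frequency effect becomes reliably visible in the RT distributions, which is the sense in which the study dates the "earliest discernible influence" of each variable.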

  1. Free Field Word recognition test in the presence of noise in normal hearing adults.

    PubMed

    Almeida, Gleide Viviani Maciel; Ribas, Angela; Calleros, Jorge

    In ideal listening situations, subjects with normal hearing can easily understand speech, as can many subjects who have a hearing loss. The aim was to present the validation of the Word Recognition Test in a Free Field in the Presence of Noise in normal-hearing adults. The sample consisted of 100 healthy adults over 18 years of age with normal hearing. After pure tone audiometry, a speech recognition test was applied in a free field condition with monosyllables and disyllables, using standardized material in three listening situations: an optimal listening condition (no noise), a signal-to-noise ratio of 0 dB, and a signal-to-noise ratio of -10 dB. For these tests, a calibrated free-field environment was arranged in which speech was presented to the subject from two speakers located at 45° and noise from a third speaker located at 180°. All participants had free-field speech audiometry results between 88% and 100% in the three listening situations. The Word Recognition Test in Free Field in the Presence of Noise proved to be easy to organize and apply. The results of the test validation suggest that individuals with normal hearing should get between 88% and 100% of the stimuli correct. The test can be an important tool for measuring the interference of noise on speech perception abilities. Copyright © 2016 Associação Brasileira de Otorrinolaringologia e Cirurgia Cérvico-Facial. Published by Elsevier Editora Ltda. All rights reserved.

  2. Age-related Effects on Word Recognition: Reliance on Cognitive Control Systems with Structural Declines in Speech-responsive Cortex

    PubMed Central

    Walczak, Adam; Ahlstrom, Jayne; Denslow, Stewart; Horwitz, Amy; Dubno, Judy R.

    2008-01-01

    Speech recognition can be difficult and effortful for older adults, even for those with normal hearing. Declining frontal lobe cognitive control has been hypothesized to cause age-related speech recognition problems. This study examined age-related changes in frontal lobe function for 15 clinically normal hearing adults (21–75 years) when they performed a word recognition task that was made challenging by decreasing word intelligibility. Although there were no age-related changes in word recognition, there were age-related changes in the degree of activity within left middle frontal gyrus (MFG) and anterior cingulate (ACC) regions during word recognition. Older adults engaged left MFG and ACC regions when words were most intelligible compared to younger adults who engaged these regions when words were least intelligible. Declining gray matter volume within temporal lobe regions responsive to word intelligibility significantly predicted left MFG activity, even after controlling for total gray matter volume, suggesting that declining structural integrity of brain regions responsive to speech leads to the recruitment of frontal regions when words are easily understood. Electronic supplementary material The online version of this article (doi:10.1007/s10162-008-0113-3) contains supplementary material, which is available to authorized users. PMID:18274825

  3. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  4. Longitudinal changes in speech recognition in older persons.

    PubMed

    Dubno, Judy R; Lee, Fu-Shing; Matthews, Lois J; Ahlstrom, Jayne B; Horwitz, Amy R; Mills, John H

    2008-01-01

    Recognition of isolated monosyllabic words in quiet and recognition of key words in low- and high-context sentences in babble were measured in a large sample of older persons enrolled in a longitudinal study of age-related hearing loss. Repeated measures were obtained yearly or every 2 to 3 years. To control for concurrent changes in pure-tone thresholds and speech levels, speech-recognition scores were adjusted using an importance-weighted speech-audibility metric (AI). Linear-regression slope estimated the rate of change in adjusted speech-recognition scores. Recognition of words in quiet declined significantly faster with age than predicted by declines in speech audibility. As subjects aged, observed scores deviated increasingly from AI-predicted scores, but this effect did not accelerate with age. Rate of decline in word recognition was significantly faster for females than males and for females with high serum progesterone levels, whereas noise history had no effect. Rate of decline did not accelerate with age but increased with degree of hearing loss, suggesting that with more severe injury to the auditory system, impairments to auditory function other than reduced audibility resulted in faster declines in word recognition as subjects aged. Recognition of key words in low- and high-context sentences in babble did not decline significantly with age.

  5. Neural networks to classify speaker independent isolated words recorded in radio car environments

    NASA Astrophysics Data System (ADS)

    Alippi, C.; Simeoni, M.; Torri, V.

    1993-02-01

    Many applications, in particular those requiring nonlinear signal processing, have proved Artificial Neural Networks (ANNs) to be invaluable tools for model-free estimation. The classifying abilities of ANNs are addressed by testing their performance in a speaker-independent word recognition application. A real-world case requiring the implementation of compact integrated devices is considered: the classification of isolated words in a radio car environment. A multispeaker database of isolated words was recorded in different environments. The data were first processed to determine the boundaries of each word and then to extract speech features, the latter accomplished using cepstral coefficient representations, log area ratios, and filter bank techniques. Multilayer perceptron and adaptive vector quantization neural paradigms were tested to find a reasonable compromise between performance and network simplicity, a fundamental requirement for the implementation of compact, real-time neural devices.
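
    The front end and classifier described above can be sketched with modern tools, assuming librosa and scikit-learn as stand-ins (the 1993 study predates both); the synthetic "word" signals, the MFCC-only features, and all parameter values are illustrative, and the sketch covers only the multilayer-perceptron paradigm, not the adaptive vector quantization comparison or the radio-car recording conditions.

      import numpy as np
      import librosa
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      sr = 16000
      rng = np.random.default_rng(0)

      def toy_word(f0):
          # Stand-in for an isolated-word recording: half a second of noisy tone.
          t = np.arange(0, 0.5, 1 / sr)
          return np.sin(2 * np.pi * f0 * t) + 0.3 * rng.normal(size=t.size)

      def cepstral_features(y, n_mfcc=13):
          # Mean Mel-frequency cepstral coefficients over the utterance, a crude
          # stand-in for the cepstral / log-area-ratio / filter-bank front ends.
          return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

      # Two toy "words" distinguished only by pitch; real data would be recorded speech.
      X = np.array([cepstral_features(toy_word(f0)) for f0 in [120] * 40 + [240] * 40])
      labels = np.array([0] * 40 + [1] * 40)
      X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, random_state=0)

      # A small multilayer perceptron, echoing one of the paradigms tested in the paper.
      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))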

  6. Phonotactics Constraints and the Spoken Word Recognition of Chinese Words in Speech

    ERIC Educational Resources Information Center

    Yip, Michael C.

    2016-01-01

    Two word-spotting experiments were conducted to examine the question of whether native Cantonese listeners are constrained by phonotactics information in spoken word recognition of Chinese words in speech. Because no legal consonant clusters occurred within an individual Chinese word, this kind of categorical phonotactics information of Chinese…

  7. L2 Word Recognition Research: A Critical Review.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1996-01-01

    Explores conceptual syntheses advancing second language (L2) word recognition research and uncovers agendas relating to cross-linguistic examinations of L2 processing in a cohort of undergraduate students in France. Describes connections between word recognition and reading, overviews the connectionist construct, and illustrates cross-linguistic…

  8. Orthographic Facilitation in Chinese Spoken Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Zou, Lijuan; Desroches, Amy S.; Liu, Youyi; Xia, Zhichao; Shu, Hua

    2012-01-01

    Orthographic influences in spoken word recognition have been previously examined in alphabetic languages. However, it is unknown whether orthographic information affects spoken word recognition in Chinese, which has a clean dissociation between orthography (O) and phonology (P). The present study investigated orthographic effects using event…

  9. Event Recognition Based on Deep Learning in Chinese Texts

    PubMed Central

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%. PMID:27501231

  10. Event Recognition Based on Deep Learning in Chinese Texts.

    PubMed

    Zhang, Yajun; Liu, Zongtian; Zhou, Wen

    2016-01-01

    Event recognition is the most fundamental and critical task in event-based natural language processing systems. Existing event recognition methods based on rules and shallow neural networks have certain limitations. For example, extracting features using methods based on rules is difficult; methods based on shallow neural networks converge too quickly to a local minimum, resulting in low recognition precision. To address these problems, we propose the Chinese emergency event recognition model based on deep learning (CEERM). Firstly, we use a word segmentation system to segment sentences. According to event elements labeled in the CEC 2.0 corpus, we classify words into five categories: trigger words, participants, objects, time and location. Each word is vectorized according to the following six feature layers: part of speech, dependency grammar, length, location, distance between trigger word and core word and trigger word frequency. We obtain deep semantic features of words by training a feature vector set using a deep belief network (DBN), then analyze those features in order to identify trigger words by means of a back propagation neural network. Extensive testing shows that the CEERM achieves excellent recognition performance, with a maximum F-measure value of 85.17%. Moreover, we propose the dynamic-supervised DBN, which adds supervised fine-tuning to a restricted Boltzmann machine layer by monitoring its training performance. Test analysis reveals that the new DBN improves recognition performance and effectively controls the training time. Although the F-measure increases to 88.11%, the training time increases by only 25.35%.
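
    The pipeline the two abstracts above describe can be outlined with scikit-learn components, assuming stacked restricted Boltzmann machines as a stand-in for the DBN stage and an MLP for the back-propagation classifier; the random feature vectors below merely occupy the place of the six described feature layers (part of speech, dependency grammar, length, location, distance to the core word, trigger-word frequency), so this is an illustrative skeleton, not the published CEERM.

      import numpy as np
      from sklearn.neural_network import BernoulliRBM, MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import MinMaxScaler

      rng = np.random.default_rng(0)
      X = rng.random((500, 6))             # placeholder six-feature word vectors
      y = rng.integers(0, 2, 500)          # 1 = trigger word, 0 = other (toy labels)

      # Stacked RBMs approximate the unsupervised deep-belief stage; the MLP stands
      # in for the supervised back-propagation classifier that identifies triggers.
      model = make_pipeline(
          MinMaxScaler(),                  # RBMs expect inputs scaled to [0, 1]
          BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0),
          BernoulliRBM(n_components=16, learning_rate=0.05, n_iter=20, random_state=0),
          MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
      )
      model.fit(X, y)
      # With random placeholder data the score is only a smoke test; the paper
      # evaluates trigger-word identification with the F-measure on CEC 2.0.
      print("training accuracy:", model.score(X, y))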

  11. Effects of Error Correction on Word Recognition and Reading Comprehension.

    ERIC Educational Resources Information Center

    Jenkins, Joseph R.; And Others

    1983-01-01

    Two procedures for correcting oral reading errors, word supply and word drill, were examined to determine their effects on measures of word recognition and comprehension with 17 learning disabled elementary school students. (Author/SW)

  12. Automatic speech recognition technology development at ITT Defense Communications Division

    NASA Technical Reports Server (NTRS)

    White, George M.

    1977-01-01

    An assessment of the applications of automatic speech recognition to defense communication systems is presented. Future research efforts include investigations into the following areas: (1) dynamic programming; (2) recognition of speech degraded by noise; (3) speaker independent recognition; (4) large vocabulary recognition; (5) word spotting and continuous speech recognition; and (6) isolated word recognition.

  13. Speed and accuracy of dyslexic versus typical word recognition: an eye-movement investigation

    PubMed Central

    Kunert, Richard; Scheepers, Christoph

    2014-01-01

    Developmental dyslexia is often characterized by a dual deficit in both word recognition accuracy and general processing speed. While previous research into dyslexic word recognition may have suffered from speed-accuracy trade-off, the present study employed a novel eye-tracking task that is less prone to such confounds. Participants (10 dyslexics and 12 controls) were asked to look at real word stimuli, and to ignore simultaneously presented non-word stimuli, while their eye-movements were recorded. Improvements in word recognition accuracy over time were modeled in terms of a continuous non-linear function. The words' rhyme consistency and the non-words' lexicality (unpronounceable, pronounceable, pseudohomophone) were manipulated within-subjects. Speed-related measures derived from the model fits confirmed generally slower processing in dyslexics, and showed a rhyme consistency effect in both dyslexics and controls. In terms of overall error rate, dyslexics (but not controls) performed less accurately on rhyme-inconsistent words, suggesting a representational deficit for such words in dyslexics. Interestingly, neither group showed a pseudohomophone effect in speed or accuracy, which might call the task-independent pervasiveness of this effect into question. The present results illustrate the importance of distinguishing between speed- vs. accuracy-related effects for our understanding of dyslexic word recognition. PMID:25346708
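
    The continuous non-linear accuracy function mentioned above can be illustrated by fitting a four-parameter logistic curve to looking data, assuming SciPy; the data, the specific functional form, and the parameter values are hypothetical and may differ from the function used in the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def accuracy_curve(t, lower, upper, slope, crossover):
          # Probability of fixating the real word rises from a lower asymptote
          # (chance with one word and one non-word) toward an upper asymptote.
          return lower + (upper - lower) / (1.0 + np.exp(-slope * (t - crossover)))

      # Hypothetical proportions of target fixations per 50-ms bin for one participant.
      rng = np.random.default_rng(0)
      t = np.arange(0, 2000, 50, dtype=float)
      p_obs = accuracy_curve(t, 0.5, 0.95, 0.008, 700) + rng.normal(0, 0.02, t.size)

      params, _ = curve_fit(accuracy_curve, t, p_obs, p0=[0.5, 0.9, 0.01, 600])
      lower, upper, slope, crossover = params
      print(f"upper asymptote = {upper:.2f} (accuracy), crossover = {crossover:.0f} ms (speed)")

    Separating the asymptote (an accuracy-related measure) from the crossover and slope (speed-related measures) is what allows speed and accuracy effects to be distinguished, as in the group comparisons reported above.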

  14. The Word Shape Hypothesis Re-Examined: Evidence for an External Feature Advantage in Visual Word Recognition

    ERIC Educational Resources Information Center

    Beech, John R.; Mayall, Kate A.

    2005-01-01

    This study investigates the relative roles of internal and external letter features in word recognition. In Experiment 1 the efficacy of outer word fragments (words with all their horizontal internal features removed) was compared with inner word fragments (words with their outer features removed) as primes in a forward masking paradigm. These…

  15. Caffeine cravings impair memory and metacognition.

    PubMed

    Palmer, Matthew A; Sauer, James D; Ling, Angus; Riza, Joshua

    2017-10-01

    Cravings for food and other substances can impair cognition. We extended previous research by testing the effects of caffeine cravings on cued-recall and recognition memory tasks, and on the accuracy of judgements of learning (JOLs; predicted future recall) and feeling-of-knowing (FOK; predicted future recognition for items that cannot be recalled). Participants (N = 55) studied word pairs (POND-BOOK) and completed a cued-recall test and a recognition test. Participants made JOLs prior to the cued-recall test and FOK judgements prior to the recognition test. Participants were randomly allocated to a craving or control condition; we manipulated caffeine cravings via a combination of abstinence, cue exposure, and imagery. Cravings impaired memory performance on the cued-recall and recognition tasks. Cravings also impaired resolution (the ability to distinguish items that would be remembered from those that would not) for FOK judgements but not JOLs, and reduced calibration (correspondence between predicted and actual accuracy) for JOLs but not FOK judgements. Additional analysis of the cued-recall data suggested that cravings also reduced participants' ability to monitor the likely accuracy of answers during the cued-recall test. These findings add to prior research demonstrating that memory strength manipulations have systematically different effects on different types of metacognitive judgements.

  16. Directed forgetting of complex pictures in an item method paradigm.

    PubMed

    Hauswald, Anne; Kissler, Johanna

    2008-11-01

    An item-cued directed forgetting paradigm was used to investigate the ability to control episodic memory and selectively encode complex coloured pictures. A series of photographs was presented to 21 participants who were instructed to either remember or forget each picture after it was presented. Memory performance was later tested with a recognition task where all presented items had to be retrieved, regardless of the initial instructions. A directed forgetting effect--that is, better recognition of "to-be-remembered" than of "to-be-forgotten" pictures--was observed, although its size was smaller than previously reported for words or line drawings. The magnitude of the directed forgetting effect correlated negatively with participants' depression and dissociation scores. The results indicate that, at least in an item method, directed forgetting occurs for complex pictures as well as words and simple line drawings. Furthermore, people with higher levels of dissociative or depressive symptoms exhibit altered memory encoding patterns.

  17. The Development of Word Recognition in a Second Language.

    ERIC Educational Resources Information Center

    Muljani, D.; Koda, Keiko; Moates, Danny R.

    1998-01-01

    A study investigated differences in English word recognition in native speakers of Indonesian (an alphabetic language) and Chinese (a logographic language) learning English as a Second Language. Results largely confirmed the hypothesis that an alphabetic first language would predict better word recognition in speakers of an alphabetic language,…

  18. The Role of Antibody in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang Hwan; Lee, Yoonhyoung; Kim, Kyungil

    2010-01-01

    A subsyllabic phonological unit, the antibody, has received little attention as a potential fundamental processing unit in word recognition. The psychological reality of the antibody in Korean recognition was investigated by looking at the performance of subjects presented with nonwords and words in the lexical decision task. In Experiment 1, the…

  19. The Effects of Explicit Word Recognition Training on Japanese EFL Learners

    ERIC Educational Resources Information Center

    Burrows, Lance; Holsworth, Michael

    2016-01-01

    This study is a quantitative, quasi-experimental investigation focusing on the effects of word recognition training on word recognition fluency, reading speed, and reading comprehension for 151 Japanese university students at a lower-intermediate reading proficiency level. Four treatment groups were given training in orthographic, phonological,…

  20. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    PubMed Central

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2014-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708

  1. Does quality of life depend on speech recognition performance for adult cochlear implant users?

    PubMed

    Capretta, Natalie R; Moberly, Aaron C

    2016-03-01

    Current postoperative clinical outcome measures for adults receiving cochlear implants (CIs) consist of testing speech recognition, primarily under quiet conditions. However, it is strongly suspected that results on these measures may not adequately reflect patients' quality of life (QOL) using their implants. This study aimed to evaluate whether QOL for CI users depends on speech recognition performance. Twenty-three postlingually deafened adults with CIs were assessed. Participants were tested for speech recognition (Central Institute for the Deaf word and AzBio sentence recognition in quiet) and completed three QOL questionnaires (the Nijmegen Cochlear Implant Questionnaire; either the Hearing Handicap Inventory for Adults or the Hearing Handicap Inventory for the Elderly; and the Speech, Spatial and Qualities of Hearing Scale) to assess a variety of QOL factors. Correlations were sought between speech recognition and QOL scores. Demographics, audiologic history, language, and cognitive skills were also examined as potential predictors of QOL. Only a few QOL scores significantly correlated with postoperative sentence or word recognition in quiet, and correlations were primarily isolated to speech-related subscales on QOL measures. Poorer pre- and postoperative unaided hearing predicted better QOL. Socioeconomic status, duration of deafness, age at implantation, duration of CI use, reading ability, vocabulary size, and cognitive status did not consistently predict QOL scores. For adult, postlingually deafened CI users, clinical speech recognition measures in quiet do not correlate broadly with QOL. Results suggest the need for additional outcome measures of the benefits and limitations of cochlear implantation. Level of evidence: 4. Laryngoscope, 126:699-706, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.

  2. Functional Anatomy of Recognition of Chinese Multi-Character Words: Convergent Evidence from Effects of Transposable Nonwords, Lexicality, and Word Frequency.

    PubMed

    Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia

    2016-01-01

    This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword, the transposable nonword, was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in emerged regions to further assess their functional roles in word processing. Results showed significant conjunctional effect and positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed.

  3. Functional Anatomy of Recognition of Chinese Multi-Character Words: Convergent Evidence from Effects of Transposable Nonwords, Lexicality, and Word Frequency

    PubMed Central

    Lin, Nan; Yu, Xi; Zhao, Ying; Zhang, Mingxia

    2016-01-01

    This fMRI study aimed to identify the neural mechanisms underlying the recognition of Chinese multi-character words by partialling out the confounding effect of reaction time (RT). For this purpose, a special type of nonword—transposable nonword—was created by reversing the character orders of real words. These nonwords were included in a lexical decision task along with regular (non-transposable) nonwords and real words. Through conjunction analysis on the contrasts of transposable nonwords versus regular nonwords and words versus regular nonwords, the confounding effect of RT was eliminated, and the regions involved in word recognition were reliably identified. The word-frequency effect was also examined in emerged regions to further assess their functional roles in word processing. Results showed significant conjunctional effect and positive word-frequency effect in the bilateral inferior parietal lobules and posterior cingulate cortex, whereas only conjunctional effect was found in the anterior cingulate cortex. The roles of these brain regions in recognition of Chinese multi-character words were discussed. PMID:26901644

  4. Recognition and reading aloud of kana and kanji word: an fMRI study.

    PubMed

    Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao

    2009-03-16

    It has been proposed that different brain regions are recruited for processing the two Japanese writing systems, namely kanji (morphograms) and kana (syllabograms). However, this difference may depend on what type of word is used and on what type of task is performed. Using fMRI, we investigated brain activation for processing kanji and kana words of similarly high familiarity in two tasks: word recognition and reading aloud. In both tasks, words and non-words were presented side by side; subjects were required to press a button corresponding to the real word in the word recognition task and to read the real word aloud in the reading aloud task. Brain activations were similar for kanji and kana during the reading aloud task, whereas during the word recognition task, which required accurate identification and selection, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which are related mainly to visual word-form analysis and visuospatial attention. Concerning the difference in brain activity between the two tasks, for kana differential activation was found only in regions associated with task-specific sensorimotor processing, whereas for kanji the visuospatial attention network also showed greater activation during the word recognition task than during the reading aloud task. We conclude that the differences in brain activation between kanji and kana depend on the interaction between script characteristics and task demands.

  5. Intact suppression of increased false recognition in schizophrenia.

    PubMed

    Weiss, Anthony P; Dodson, Chad S; Goff, Donald C; Schacter, Daniel L; Heckers, Stephan

    2002-09-01

    Recognition memory is impaired in patients with schizophrenia, as they rely largely on item familiarity, rather than conscious recollection, to make mnemonic decisions. False recognition of novel items (foils) is increased in schizophrenia and may relate to this deficit in conscious recollection. By studying pictures of the target word during encoding, healthy adults can suppress false recognition. This study examined the effect of pictorial encoding on subsequent recognition of repeated foils in patients with schizophrenia. The study included 40 patients with schizophrenia and 32 healthy comparison subjects. After incidental encoding of 60 words or pictures, subjects were tested for recognition of target items intermixed with 60 new foils. These new foils were subsequently repeated following either a two- or 24-word delay. Subjects were instructed to label these repeated foils as new and not to mistake them for old target words. Schizophrenic patients showed greater overall false recognition of repeated foils. The rate of false recognition of repeated foils was lower after picture encoding than after word encoding. Despite higher levels of false recognition of repeated new items, patients and comparison subjects demonstrated a similar degree of false recognition suppression after picture, as compared to word, encoding. Patients with schizophrenia displayed greater false recognition of repeated foils than comparison subjects, suggesting both a decrement of item- (or source-) specific recollection and a consequent reliance on familiarity in schizophrenia. Despite these deficits, presenting pictorial information at encoding allowed schizophrenic subjects to suppress false recognition to a similar degree as the comparison group, implying the intact use of a high-level cognitive strategy in this population.

  6. Psychometric Functions for Shortened Administrations of a Speech Recognition Approach Using Tri-Word Presentations and Phonemic Scoring

    ERIC Educational Resources Information Center

    Gelfand, Stanley A.; Gelfand, Jessica T.

    2012-01-01

    Method: Complete psychometric functions for phoneme and word recognition scores at 8 signal-to-noise ratios from -15 dB to 20 dB were generated for the first 10, 20, and 25, as well as all 50, three-word presentations of the Tri-Word or Computer Assisted Speech Recognition Assessment (CASRA) Test (Gelfand, 1998) based on the results of 12…

  7. Acoustic-Phonetic Versus Lexical Processing in Nonnative Listeners Differing in Their Dominant Language.

    PubMed

    Shi, Lu-Feng; Koenig, Laura L

    2016-09-01

    Nonnative listeners have difficulty recognizing English words due to underdeveloped acoustic-phonetic and/or lexical skills. The present study used Boothroyd and Nittrouer's (1988) j factor to tease apart these two components of word recognition. Participants included 15 native English and 29 native Russian listeners. Fourteen and 15 of the Russian listeners reported English (ED) and Russian (RD) to be their dominant language, respectively. Listeners were presented with 119 consonant-vowel-consonant real and nonsense words in speech-spectrum noise at +6 dB SNR. Responses were scored for word and phoneme recognition, the logarithmic quotient of which yielded j. Word and phoneme recognition was comparable between native and ED listeners but poorer in RD listeners. Analysis of j indicated less effective use of lexical information in RD than in native and ED listeners. Lexical processing was strongly correlated with the length of residence in the United States. Language background is important for nonnative word recognition. Lexical skills can be regarded as nativelike in ED nonnative listeners. Compromised word recognition in ED listeners is unlikely to be a result of poor lexical processing. Performance should be interpreted with caution for listeners dominant in their first language, whose word recognition is affected by both lexical and acoustic-phonetic factors.
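
    The j factor can be written out explicitly: if a word is recognized only when each of j effectively independent parts is recognized, then p_word = p_phoneme^j, so j = log(p_word) / log(p_phoneme). A small sketch, with proportions correct that are invented for illustration:

      import numpy as np

      def j_factor(p_word, p_phoneme):
          # Boothroyd & Nittrouer (1988): j estimates the number of effectively
          # independent parts a listener needs to recognize a whole word; smaller
          # j implies stronger use of lexical context.
          return np.log(p_word) / np.log(p_phoneme)

      # Hypothetical proportions correct for CVC words in noise.
      print(round(j_factor(p_word=0.40, p_phoneme=0.70), 2))  # ~2.57: lexical context helps
      print(round(j_factor(p_word=0.34, p_phoneme=0.70), 2))  # ~3.02: three phonemes treated independently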

  8. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  9. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    PubMed

    Haro, Juan; Ferré, Pilar

    2018-06-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous words processing and representation.

  10. Continuous multiword recognition performance of young and elderly listeners in ambient noise

    NASA Astrophysics Data System (ADS)

    Sato, Hiroshi

    2005-09-01

    Hearing threshold shift due to aging is known to be a dominant factor degrading speech recognition performance in noisy conditions. On the other hand, the age-related cognitive factors affecting speech recognition performance in various speech-to-noise conditions are not well established. In this study, two kinds of speech test were performed to examine how working memory load relates to speech recognition performance. One is a word recognition test with high-familiarity, four-syllable Japanese words (single-word test). In this test, each word was presented to listeners, who were asked to write the word down on paper with enough time to answer. In the other test, five words were presented in succession and listeners were asked to write them down only after all five had been presented (multiword test). Both tests were done at various speech-to-noise ratios under 50-dBA Hoth spectrum noise with more than 50 young and elderly subjects. The results of the two experiments suggest that (1) hearing level is related to the scores of both tests, (2) scores on the single-word test are well correlated with those on the multiword test, and (3) scores on the multiword test do not improve as the speech-to-noise ratio improves in the range where scores on the single-word test reach their ceiling.

  11. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space.

    PubMed

    Ma, Wei Ji; Zhou, Xiang; Ross, Lars A; Foxe, John J; Parra, Lucas C

    2009-01-01

    Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
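
    The model described above lends itself to a small simulation, assuming NumPy; the vocabulary size, dimensionality, noise levels, and decision rule below are illustrative choices, not the authors' fitted model. With a high-dimensional feature space and a moderately noisy visual cue, the audiovisual gain in this kind of simulation tends to peak at intermediate auditory noise rather than at the highest noise level, in line with the finding reported above.

      import numpy as np

      rng = np.random.default_rng(0)
      n_words, dim = 200, 50                 # vocabulary as points in a high-dimensional feature space
      words = rng.normal(size=(n_words, dim))
      sigma_v = 2.0                          # lip-reading cue: informative but noisy

      def accuracy(sigma_a, use_visual, trials=1000):
          correct = 0
          for _ in range(trials):
              k = rng.integers(n_words)
              x_a = words[k] + rng.normal(scale=sigma_a, size=dim)
              # Log-likelihood of each candidate word given the auditory observation.
              ll = -np.sum((words - x_a) ** 2, axis=1) / (2 * sigma_a ** 2)
              if use_visual:
                  x_v = words[k] + rng.normal(scale=sigma_v, size=dim)
                  ll += -np.sum((words - x_v) ** 2, axis=1) / (2 * sigma_v ** 2)
              correct += (np.argmax(ll) == k)  # probabilistic inference: pick the most likely word
          return correct / trials

      for sigma_a in [0.5, 1.0, 2.0, 4.0, 8.0]:
          a_only = accuracy(sigma_a, use_visual=False)
          av = accuracy(sigma_a, use_visual=True)
          print(f"auditory noise {sigma_a:>4}: A = {a_only:.2f}  AV = {av:.2f}  benefit = {av - a_only:.2f}")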

  12. Lexical precision in skilled readers: Individual differences in masked neighbor priming.

    PubMed

    Andrews, Sally; Hersch, Jolyn

    2010-05-01

    Two experiments investigated the relationship between masked form priming and individual differences in reading and spelling proficiency among university students. Experiment 1 assessed neighbor priming for 4-letter word targets from high- and low-density neighborhoods in 97 university students. The overall results replicated previous evidence of facilitatory neighborhood priming only for low-neighborhood words. However, analyses including measures of reading and spelling proficiency as covariates revealed that better spellers showed inhibitory priming for high-neighborhood words, while poorer spellers showed facilitatory priming. Experiment 2, with 123 participants, replicated the finding of stronger inhibitory neighbor priming in better spellers using 5-letter words and distinguished facilitatory and inhibitory components of priming by comparing neighbor primes with ambiguous and unambiguous partial-word primes (e.g., crow#, cr#wd, and crown as primes for the target CROWD). The results indicate that spelling ability is selectively associated with inhibitory effects of lexical competition. The implications for theories of visual word recognition and the lexical quality hypothesis of reading skill are discussed.

  13. Standard-Chinese Lexical Neighborhood Test in normal-hearing young children.

    PubMed

    Liu, Chang; Liu, Sha; Zhang, Ning; Yang, Yilin; Kong, Ying; Zhang, Luo

    2011-06-01

    The purposes of the present study were to establish the Standard-Chinese version of Lexical Neighborhood Test (LNT) and to examine the lexical and age effects on spoken-word recognition in normal-hearing children. Six lists of monosyllabic and six lists of disyllabic words (20 words/list) were selected from the database of daily speech materials for normal-hearing (NH) children of ages 3-5 years. The lists were further divided into "easy" and "hard" halves according to the word frequency and neighborhood density in the database based on the theory of Neighborhood Activation Model (NAM). Ninety-six NH children (age ranged between 4.0 and 7.0 years) were divided into three different age groups of 1-year intervals. Speech-perception tests were conducted using the Standard-Chinese monosyllabic and disyllabic LNT. The inter-list performance was found to be equivalent and inter-rater reliability was high with 92.5-95% consistency. Results of word-recognition scores showed that the lexical effects were all significant. Children scored higher with disyllabic words than with monosyllabic words. "Easy" words scored higher than "hard" words. The word-recognition performance also increased with age in each lexical category. A multiple linear regression analysis showed that neighborhood density, age, and word frequency appeared to have increasingly more contributions to Chinese word recognition. The results of the present study indicated that performances of Chinese word recognition were influenced by word frequency, age, and neighborhood density, with word frequency playing a major role. These results were consistent with those in other languages, supporting the application of NAM in the Chinese language. The development of Standard-Chinese version of LNT and the establishment of a database of children of 4-6 years old can provide a reliable means for spoken-word recognition test in children with hearing impairment. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
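
    One way to read "increasingly more contributions" is to compare standardized regression coefficients, as in the sketch below (NumPy only); the simulated scores and coefficients are invented for illustration and are not the study's data.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 96                                        # hypothetical children, matching the study's sample size
      age = rng.uniform(4.0, 7.0, n)                # years
      word_freq = rng.normal(0.0, 1.0, n)           # toy (standardized) word frequency
      density = rng.normal(0.0, 1.0, n)             # toy (standardized) neighborhood density
      score = 60 + 8 * word_freq + 5 * age - 3 * density + rng.normal(0, 6, n)

      # Standardize everything so coefficient magnitudes are directly comparable.
      X = np.column_stack([density, age, word_freq])
      Xz = (X - X.mean(axis=0)) / X.std(axis=0)
      yz = (score - score.mean()) / score.std()
      beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(n), Xz]), yz, rcond=None)
      for name, b in zip(["neighborhood density", "age", "word frequency"], beta[1:]):
          print(f"standardized beta for {name}: {b:+.2f}")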

  14. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  15. Concurrent Correlates of Chinese Word Recognition in Deaf and Hard-of-Hearing Children

    ERIC Educational Resources Information Center

    Ching, Boby Ho-Hong; Nunes, Terezinha

    2015-01-01

    The aim of this study was to explore the relative contributions of phonological, semantic radical, and morphological awareness to Chinese word recognition in deaf and hard-of-hearing (DHH) children. Measures of word recognition, general intelligence, phonological, semantic radical, and morphological awareness were administered to 32 DHH and 35…

  16. Auditory Perception and Word Recognition in Cantonese-Chinese Speaking Children with and without Specific Language Impairment

    ERIC Educational Resources Information Center

    Kidd, Joanna C.; Shum, Kathy K.; Wong, Anita M.-Y.; Ho, Connie S.-H.

    2017-01-01

    Auditory processing and spoken word recognition difficulties have been observed in Specific Language Impairment (SLI), raising the possibility that auditory perceptual deficits disrupt word recognition and, in turn, phonological processing and oral language. In this study, fifty-seven kindergarten children with SLI and fifty-three language-typical…

  17. Formal Models of Word Recognition. Final Report.

    ERIC Educational Resources Information Center

    Travers, Jeffrey R.

    Existing mathematical models of word recognition are reviewed and a new theory is proposed in this research. The new theory integrates earlier proposals within a single framework, sacrificing none of the predictive power of the earlier proposals, but offering a gain in theoretical economy. The theory holds that word recognition is accomplished by…

  18. Surviving Blind Decomposition: A Distributional Analysis of the Time-Course of Complex Word Recognition

    ERIC Educational Resources Information Center

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-01-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. "Form-then-meaning" accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings,…

  19. Specifying Theories of Developmental Dyslexia: A Diffusion Model Analysis of Word Recognition

    ERIC Educational Resources Information Center

    Zeguers, Maaike H. T.; Snellings, Patrick; Tijms, Jurgen; Weeda, Wouter D.; Tamboer, Peter; Bexkens, Anika; Huizenga, Hilde M.

    2011-01-01

    The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and auditory lexical decision data. The first study showed…

  20. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  1. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  2. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    PubMed Central

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2011-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimuli characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading-off in their utilization of lexical and nonlexical information. PMID:21728459

  3. Lexical-Access Ability and Cognitive Predictors of Speech Recognition in Noise in Adult Cochlear Implant Users

    PubMed Central

    Smits, Cas; Merkus, Paul; Festen, Joost M.; Goverts, S. Theo

    2017-01-01

    Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability compared with other cognitive factors on speech recognition of 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and to a lesser extent poor lexical-access ability limit speech-recognition ability in listeners with a CI. PMID:29205095

  4. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    ERIC Educational Resources Information Center

    Haro, Juan; Ferré, Pilar

    2018-01-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these…

  5. Selective attention and recognition: effects of congruency on episodic learning.

    PubMed

    Rosner, Tamara M; D'Angelo, Maria C; MacLellan, Ellen; Milliken, Bruce

    2015-05-01

    Recent research on cognitive control has focused on the learning consequences of high selective attention demands in selective attention tasks (e.g., Botvinick, Cognit Affect Behav Neurosci 7(4):356-366, 2007; Verguts and Notebaert, Psychol Rev 115(2):518-525, 2008). The current study extends these ideas by examining the influence of selective attention demands on remembering. In Experiment 1, participants read aloud the red word in a pair of red and green spatially interleaved words. Half of the items were congruent (the interleaved words had the same identity), and the other half were incongruent (the interleaved words had different identities). Following the naming phase, participants completed a surprise recognition memory test. In this test phase, recognition memory was better for incongruent than for congruent items. In Experiment 2, context was only partially reinstated at test, and again recognition memory was better for incongruent than for congruent items. In Experiment 3, all of the items contained two different words, but in one condition the words were presented close together and interleaved, while in the other condition the two words were spatially separated. Recognition memory was better for the interleaved than for the separated items. This result rules out an interpretation of the congruency effects on recognition in Experiments 1 and 2 that hinges on stronger relational encoding for items that have two different words. Together, the results support the view that selective attention demands for incongruent items lead to encoding that improves recognition.
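
    Recognition accuracy in designs like this is often summarized as d' computed from hit and false-alarm rates. The sketch below shows that standard signal-detection calculation with a simple correction for extreme rates; the counts are invented for illustration and are not the experiment's data.

    ```python
    from scipy.stats import norm

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Old/new recognition sensitivity, with a log-linear correction so that
        hit or false-alarm rates of 0 or 1 do not produce infinite z-scores."""
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Illustrative counts only: 40 old and 40 new items per condition.
    print("incongruent:", round(d_prime(hits=30, misses=10, false_alarms=8, correct_rejections=32), 2))
    print("congruent:  ", round(d_prime(hits=24, misses=16, false_alarms=8, correct_rejections=32), 2))
    ```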

  6. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise.

    PubMed

    Carroll, Rebecca; Warzybok, Anna; Kollmeier, Birger; Ruigendijk, Esther

    2016-01-01

    Vocabulary size has been suggested as a useful measure of "verbal abilities" that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized that speech recognition performance can be predicted by the efficiency of lexical access, which refers to the speed with which a given word can be searched and accessed relative to the size of the mental lexicon. We tested speech recognition in a clinical German sentence-in-noise test at two signal-to-noise ratios (SNRs), in 22 younger (18-35 years) and 22 older (60-78 years) listeners with normal hearing. We also assessed receptive vocabulary, lexical access time, verbal working memory, and hearing thresholds as measures of individual differences. Age group, SNR level, vocabulary size, and lexical access time were significant predictors of individual speech recognition scores, but working memory and hearing threshold were not. Interestingly, longer accessing times were correlated with better speech recognition scores. Hierarchical regression models for each subset of age group and SNR showed very similar patterns: the combination of vocabulary size and lexical access time contributed most to speech recognition performance; only for the younger group at the better SNR (yielding about 85% correct speech recognition) did vocabulary size alone predict performance. Our data suggest that successful speech recognition in noise is mainly modulated by the efficiency of lexical access. This suggests that older adults' poorer performance in the speech recognition task may have arisen from reduced efficiency in lexical access; with an average vocabulary size similar to that of younger adults, they were still slower in lexical access.

  7. Comparison of crisp and fuzzy character networks in handwritten word recognition

    NASA Technical Reports Server (NTRS)

    Gader, Paul; Mohamed, Magdi; Chiang, Jung-Hsien

    1992-01-01

    Experiments involving handwritten word recognition on words taken from images of handwritten address blocks from the United States Postal Service mailstream are described. The word recognition algorithm relies on the use of neural networks at the character level. The neural networks are trained using crisp and fuzzy desired outputs. The fuzzy outputs were defined using a fuzzy k-nearest neighbor algorithm. The crisp networks slightly outperformed the fuzzy networks at the character level but the fuzzy networks outperformed the crisp networks at the word level.
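
    One common way to produce the kind of fuzzy desired outputs mentioned above is to derive soft class memberships from the k nearest training samples, weighting each neighbor's vote by inverse distance (in the spirit of Keller-style fuzzy k-NN). The sketch below illustrates that idea on toy data; it does not reproduce the paper's exact membership scheme or character features.

    ```python
    import numpy as np

    def fuzzy_knn_targets(train_x, train_y, x, n_classes, k=3, m=2.0):
        """Return a soft class-membership vector for sample x.

        Memberships are inverse-distance-weighted votes of the k nearest training
        samples (exponent 2/(m-1), as in fuzzy k-NN), normalized to sum to 1.
        Such vectors can replace crisp one-hot targets when training a character classifier.
        """
        dists = np.linalg.norm(train_x - x, axis=1)
        nearest = np.argsort(dists)[:k]
        weights = 1.0 / (dists[nearest] ** (2.0 / (m - 1.0)) + 1e-12)
        memberships = np.zeros(n_classes)
        for idx, w in zip(nearest, weights):
            memberships[train_y[idx]] += w
        return memberships / memberships.sum()

    # Toy 2-D "character features" for two classes (purely illustrative).
    train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1], [0.5, 0.6]])
    train_y = np.array([0, 0, 1, 1, 1])
    print(fuzzy_knn_targets(train_x, train_y, x=np.array([0.4, 0.5]), n_classes=2))
    ```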

  8. Single-Word Recognition Need Not Depend on Single-Word Features: Narrative Coherence Counteracts Effects of Single-Word Features That Lexical Decision Emphasizes

    ERIC Educational Resources Information Center

    Teng, Dan W.; Wallot, Sebastian; Kelty-Stephen, Damian G.

    2016-01-01

    Research on reading comprehension of connected text emphasizes reliance on single-word features that organize a stable, mental lexicon of words and that speed or slow the recognition of each new word. However, the time needed to recognize a word might not actually be as fixed as previous research indicates, and the stability of the mental lexicon…

  9. The effects of sleep deprivation on item and associative recognition memory.

    PubMed

    Ratcliff, Roger; Van Dongen, Hans P A

    2018-02-01

    Sleep deprivation adversely affects the ability to perform cognitive tasks, but theories range from predicting an overall decline in cognitive functioning because of reduced stability in attentional networks to specific deficits in various cognitive domains or processes. We measured the effects of sleep deprivation on two memory tasks, item recognition ("was this word in the list studied") and associative recognition ("were these two words studied in the same pair"). These tasks test memory for information encoded a few minutes earlier and so do not address effects of sleep deprivation on working memory or consolidation after sleep. A diffusion model was used to decompose accuracy and response time distributions to produce parameter estimates of components of cognitive processing. The model assumes that over time, noisy evidence from the task stimulus is accumulated to one of two decision criteria, and parameters governing this process are extracted and interpreted in terms of distinct cognitive processes. Results showed that sleep deprivation reduces drift rate (evidence used in the decision process), with little effect on the other components of the decision process. These results contrast with the effects of aging, which show little decline in item recognition but large declines in associative recognition. The results suggest that sleep deprivation degrades the quality of information stored in memory and that this may occur through degraded attentional processes. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. The Effects of Lexical Pitch Accent on Infant Word Recognition in Japanese

    PubMed Central

    Ota, Mitsuhiko; Yamane, Naoto; Mazuka, Reiko

    2018-01-01

    Learners of lexical tone languages (e.g., Mandarin) develop sensitivity to tonal contrasts and recognize pitch-matched, but not pitch-mismatched, familiar words by 11 months. Learners of non-tone languages (e.g., English) also show a tendency to treat pitch patterns as lexically contrastive up to about 18 months. In this study, we examined if this early-developing capacity to lexically encode pitch variations enables infants to acquire a pitch accent system, in which pitch-based lexical contrasts are obscured by the interaction of lexical and non-lexical (i.e., intonational) features. Eighteen 17-month-olds learning Tokyo Japanese were tested on their recognition of familiar words with the expected pitch or the lexically opposite pitch pattern. In early trials, infants were faster in shifting their eyegaze from the distractor object to the target object than in shifting from the target to distractor in the pitch-matched condition. In later trials, however, infants showed faster distractor-to-target than target-to-distractor shifts in both the pitch-matched and pitch-mismatched conditions. We interpret these results to mean that, in a pitch-accent system, the ability to use pitch variations to recognize words is still in a nascent state at 17 months. PMID:29375452

  11. Exploring multiple feature combination strategies with a recurrent neural network architecture for off-line handwriting recognition

    NASA Astrophysics Data System (ADS)

    Mioulet, L.; Bideault, G.; Chatelain, C.; Paquet, T.; Brunessaux, S.

    2015-01-01

    The BLSTM-CTC is a novel recurrent neural network architecture that has outperformed previous state-of-the-art algorithms in tasks such as speech recognition and handwriting recognition. It has the ability to process long-term dependencies in temporal signals in order to label unsegmented data. This paper describes different ways of combining features within a BLSTM-CTC architecture: low-level combination (feature space), mid-level combination (internal system representation), and high-level combination (decoding). The results are compared on the RIMES word database. Our results show that the low-level combination works best, thanks to the powerful data modeling of the LSTM neurons.
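
    The low-level (feature-space) combination described above amounts to concatenating the per-frame feature streams before the recurrent layers. A minimal PyTorch sketch of that idea follows; the feature widths, layer sizes, label count, and the two random "streams" are all invented, so this illustrates the combination strategy rather than the paper's actual system.

    ```python
    import torch
    import torch.nn as nn

    class FeatureCombineBLSTMCTC(nn.Module):
        """Low-level combination: concatenate per-frame feature streams, then decode
        the combined sequence with a bidirectional LSTM trained under a CTC loss."""

        def __init__(self, feat_dims=(20, 36), hidden=128, n_labels=80):
            super().__init__()
            self.blstm = nn.LSTM(sum(feat_dims), hidden, num_layers=2,
                                 bidirectional=True, batch_first=True)
            self.proj = nn.Linear(2 * hidden, n_labels + 1)   # +1 for the CTC blank symbol

        def forward(self, stream_a, stream_b):
            x = torch.cat([stream_a, stream_b], dim=-1)       # (batch, frames, feat_a + feat_b)
            out, _ = self.blstm(x)
            return self.proj(out).log_softmax(dim=-1)         # per-frame label log-probabilities

    # Illustrative shapes: 4 word images, 150 frames, two feature streams of width 20 and 36.
    model = FeatureCombineBLSTMCTC()
    a, b = torch.randn(4, 150, 20), torch.randn(4, 150, 36)
    log_probs = model(a, b).transpose(0, 1)                   # CTC expects (frames, batch, labels)
    targets = torch.randint(1, 81, (4, 12))                   # dummy character label sequences
    loss = nn.CTCLoss(blank=0)(log_probs, targets,
                               torch.full((4,), 150), torch.full((4,), 12))
    print(float(loss))
    ```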

  12. Acquisition of Malay word recognition skills: lessons from low-progress early readers.

    PubMed

    Lee, Lay Wah; Wheldall, Kevin

    2011-02-01

    Malay is a consistent alphabetic orthography with complex syllable structures. The focus of this research was to investigate word recognition performance in order to inform reading interventions for low-progress early readers. Forty-six Grade 1 students were sampled and 11 were identified as low-progress readers. The results indicated that both syllable awareness and phoneme blending were significant predictors of word recognition, suggesting that both syllable and phonemic grain-sizes are important in Malay word recognition. Item analysis revealed a hierarchical pattern of difficulty based on the syllable and the phonic structure of the words. Error analysis identified the sources of errors to be errors due to inefficient syllable segmentation, oversimplification of syllables, insufficient grapheme-phoneme knowledge and inefficient phonemic code assembly. Evidence also suggests that direct instruction in syllable segmentation, phonemic awareness and grapheme-phoneme correspondence is necessary for low-progress readers to acquire word recognition skills. Finally, a logical sequence to teach grapheme-phoneme decoding in Malay is suggested. Copyright © 2010 John Wiley & Sons, Ltd.

  13. Context effects and false memory for alcohol words in adolescents.

    PubMed

    Zack, Martin; Sharpley, Justin; Dent, Clyde W; Stacy, Alan W

    2009-03-01

    This study assessed incidental recognition of Alcohol and Neutral words in adolescents who encoded the words under distraction. Participants were 171 (87 male) 10th grade students, ages 14-16 (M=15.1) years. Testing was conducted by telephone: participants listened to a list containing Alcohol and Neutral (Experimental--Group E, n=92) or only Neutral (Control--Group C, n=79) words, while counting backwards from 200 by twos. Recognition was tested immediately thereafter. Group C exhibited higher false recognition of Neutral than Alcohol items, whereas Group E displayed equivalent false-recognition rates for both word types. The reported number of alcohol TV ads seen in the past week predicted higher false recognition of Neutral words in Group C and of Alcohol words in Group E. False memory for Alcohol words in Group E was greater in males and in highly anxiety-sensitive participants. These context-dependent biases may contribute to exaggerations in perceived drinking norms previously found to predict alcohol misuse in young drinkers.

  14. Relationships between Structural and Acoustic Properties of Maternal Talk and Children's Early Word Recognition

    ERIC Educational Resources Information Center

    Suttora, Chiara; Salerni, Nicoletta; Zanchi, Paola; Zampini, Laura; Spinelli, Maria; Fasolo, Mirco

    2017-01-01

    This study aimed to investigate specific associations between structural and acoustic characteristics of infant-directed (ID) speech and word recognition. Thirty Italian-acquiring children and their mothers were tested when the children were 1;3. Children's word recognition was measured with the looking-while-listening task. Maternal ID speech was…

  15. The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands

    ERIC Educational Resources Information Center

    Diana, Rachel A.; Reder, Lynne M.

    2006-01-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…

  16. Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language

    ERIC Educational Resources Information Center

    Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter

    2011-01-01

    Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…

  17. Morphing Images: A Potential Tool for Teaching Word Recognition to Children with Severe Learning Difficulties

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2005-01-01

    Children with severe learning difficulties who fail to begin word recognition can learn to recognise pictures and symbols relatively easily. However, finding an effective means of using pictures to teach word recognition has proved problematic. This research explores the use of morphing software to support the transition from picture to word…

  18. Examination of the neighborhood activation theory in normal and hearing-impaired listeners.

    PubMed

    Dirks, D D; Takayanagi, S; Moshfegh, A; Noffsinger, P D; Fausti, S A

    2001-02-01

    Experiments were conducted to examine the effects of lexical information on word recognition among normal hearing listeners and individuals with sensorineural hearing loss. The lexical factors of interest were incorporated in the Neighborhood Activation Model (NAM). Central to this model is the concept that words are recognized relationally in the context of other phonemically similar words. NAM suggests that words in the mental lexicon are organized into similarity neighborhoods and the listener is required to select the target word from competing lexical items. Two structural characteristics of similarity neighborhoods that influence word recognition have been identified: "neighborhood density," or the number of phonemically similar words (neighbors) for a particular target item, and "neighborhood frequency," or the average frequency of occurrence of all the items within a neighborhood. A third lexical factor, "word frequency," or the frequency of occurrence of a target word in the language, is assumed to optimize the word recognition process by biasing the system toward choosing a high frequency over a low frequency word. Three experiments were performed. In the initial experiments, word recognition for consonant-vowel-consonant (CVC) monosyllables was assessed in young normal hearing listeners by systematically partitioning the items into the eight possible lexical conditions that could be created by two levels of the three lexical factors, word frequency (high and low), neighborhood density (high and low), and average neighborhood frequency (high and low). Neighborhood structure and word frequency were estimated computationally using a large on-line lexicon based on Webster's Pocket Dictionary. From this program 400 highly familiar monosyllables were selected and partitioned into eight orthogonal lexical groups (50 words/group). The 400 words were presented randomly to normal hearing listeners in speech-shaped noise (Experiment 1) and "in quiet" (Experiment 2) as well as to an elderly group of listeners with sensorineural hearing loss in the speech-shaped noise (Experiment 3). The results of the three experiments verified predictions of NAM in both normal hearing and hearing-impaired listeners. In each experiment, words from low density neighborhoods were recognized more accurately than those from high density neighborhoods. The presence of high frequency neighbors (average neighborhood frequency) produced poorer recognition performance than comparable conditions with low frequency neighbors. Word frequency was found to have a highly significant effect on word recognition. Lexical conditions with high word frequencies produced higher performance scores than conditions with low frequency words. The results supported the basic tenets of NAM theory and identified both neighborhood structural properties and word frequency as significant lexical factors affecting word recognition when listening in noise and "in quiet." The results of the third experiment permit extension of NAM theory to individuals with sensorineural hearing loss. Future development of speech recognition tests should allow for the effects of higher level cognitive (lexical) factors on lower level phonemic processing.
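
    Neighborhood density and average neighborhood frequency, as used above, can be computed by counting the lexicon entries that differ from a target by a single segment substitution, deletion, or addition. A small sketch with a toy lexicon and invented frequency counts:

    ```python
    def is_neighbor(a, b):
        """True if b differs from a by one substitution, deletion, or addition of a
        single segment (the usual one-phoneme-away definition of a lexical neighbor)."""
        if a == b:
            return False
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)
        if len(long_) - len(short) != 1:
            return False
        # deletion/addition: removing one segment from the longer word must yield the shorter
        return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))

    def neighborhood_stats(target, lexicon):
        """lexicon: dict mapping transcriptions to frequency-of-occurrence counts."""
        neighbors = [w for w in lexicon if is_neighbor(target, w)]
        density = len(neighbors)
        mean_freq = sum(lexicon[w] for w in neighbors) / density if density else 0.0
        return density, mean_freq

    # Toy CVC lexicon with invented frequency counts.
    lexicon = {"kat": 120, "bat": 90, "kab": 5, "kot": 30, "mat": 60, "kast": 2, "dog": 75}
    print(neighborhood_stats("kat", lexicon))   # -> (neighborhood density, mean neighborhood frequency)
    ```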

  19. A chimpanzee recognizes synthetic speech with significantly reduced acoustic cues to phonetic content.

    PubMed

    Heimbauer, Lisa A; Beran, Michael J; Owren, Michael J

    2011-07-26

    A long-standing debate concerns whether humans are specialized for speech perception, which some researchers argue is demonstrated by the ability to understand synthetic speech with significantly reduced acoustic cues to phonetic content. We tested a chimpanzee (Pan troglodytes) that recognizes 128 spoken words, asking whether she could understand such speech. Three experiments presented 48 individual words, with the animal selecting a corresponding visuographic symbol from among four alternatives. Experiment 1 tested spectrally reduced, noise-vocoded (NV) synthesis, originally developed to simulate input received by human cochlear-implant users. Experiment 2 tested "impossibly unspeechlike" sine-wave (SW) synthesis, which reduces speech to just three moving tones. Although receiving only intermittent and noncontingent reward, the chimpanzee performed well above chance level, including when hearing synthetic versions for the first time. Recognition of SW words was least accurate but improved in experiment 3 when natural words in the same session were rewarded. The chimpanzee was more accurate with NV than SW versions, as were 32 human participants hearing these items. The chimpanzee's ability to spontaneously recognize acoustically reduced synthetic words suggests that experience rather than specialization is critical for speech-perception capabilities that some have suggested are uniquely human. Copyright © 2011 Elsevier Ltd. All rights reserved.
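
    Noise-vocoded speech of the kind used in Experiment 1 is typically created by filtering the signal into a few frequency bands, extracting each band's amplitude envelope, and using the envelopes to modulate band-limited noise. The rough scipy sketch below follows that general recipe with arbitrary band edges and filter settings; it is not the procedure or the materials used in the study.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def noise_vocode(signal, fs, band_edges=(100, 500, 1500, 4000), env_cutoff=30.0):
        """Crude noise vocoder: per band, replace fine structure with envelope-modulated noise."""
        rng = np.random.default_rng(0)
        out = np.zeros(len(signal))
        for lo, hi in zip(band_edges[:-1], band_edges[1:]):
            b, a = butter(4, [lo, hi], btype="band", fs=fs)
            band = filtfilt(b, a, signal)
            envelope = np.abs(hilbert(band))                            # amplitude envelope of the band
            b_env, a_env = butter(2, env_cutoff, btype="low", fs=fs)
            envelope = filtfilt(b_env, a_env, envelope)                 # smooth the envelope
            carrier = filtfilt(b, a, rng.standard_normal(len(signal)))  # band-limited noise carrier
            out += envelope * carrier
        return out / (np.max(np.abs(out)) + 1e-12)

    # Illustrative input: one second of a synthetic tone complex at 16 kHz (not real speech).
    fs = 16000
    t = np.arange(fs) / fs
    speechlike = np.sin(2 * np.pi * 150 * t) + 0.5 * np.sin(2 * np.pi * 450 * t)
    print(noise_vocode(speechlike, fs).shape)
    ```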

  20. Embedded Words in Visual Word Recognition: Does the Left Hemisphere See the Rain in Brain?

    ERIC Educational Resources Information Center

    McCormick, Samantha F.; Davis, Colin J.; Brysbaert, Marc

    2010-01-01

    To examine whether interhemispheric transfer during foveal word recognition entails a discontinuity between the information presented to the left and right of fixation, we presented target words in such a way that participants fixated immediately left or right of an embedded word (as in "gr*apple", "bull*et") or in the middle…

  1. Lexico-Semantic Structure and the Word-Frequency Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Monaco, Joseph D.; Abbott, L. F.; Kahana, Michael J.

    2007-01-01

    The word-frequency effect (WFE) in recognition memory refers to the finding that more rare words are better recognized than more common words. We demonstrate that a familiarity-discrimination model operating on data from a semantic word-association space yields a robust WFE in data on both hit rates and false-alarm rates. Our modeling results…

  2. Not All Reading Disabilities Are Dyslexia: Distinct Neurobiology of Specific Comprehension Deficits

    PubMed Central

    Clements-Stephens, Amy; Pugh, Kenneth R.; Burns, Scott; Cao, Aize; Pekar, James J.; Davis, Nicole; Rimrodt, Sheryl L.

    2013-01-01

    Although an extensive literature exists on the neurobiological correlates of dyslexia (DYS), to date, no studies have examined the neurobiological profile of those who exhibit poor reading comprehension despite intact word-level abilities (specific reading comprehension deficits [S-RCD]). Here we investigated the word-level abilities of S-RCD as compared to typically developing readers (TD) and those with DYS by examining the blood oxygenation-level dependent response to words varying on frequency. Understanding whether S-RCD process words in the same manner as TD, or show alternate pathways to achieve normal word-reading abilities, may provide insights into the origin of this disorder. Results showed that as compared to TD, DYS showed abnormal covariance during word processing with right-hemisphere homologs of the left-hemisphere reading network in conjunction with left occipitotemporal underactivation. In contrast, S-RCD showed an intact neurobiological response to word stimuli in occipitotemporal regions (associated with fast and efficient word processing); however, inferior frontal gyrus (IFG) abnormalities were observed. Specifically, TD showed a higher-percent signal change within right IFG for low-versus-high frequency words as compared to both S-RCD and DYS. Using psychophysiological interaction analyses, a coupling-by-reading group interaction was found in right IFG for DYS, as indicated by a widespread greater covariance between right IFG and right occipitotemporal cortex/visual word-form areas, as well as bilateral medial frontal gyrus, as compared to TD. For S-RCD, the context-dependent functional interaction anomaly was most prominently seen in left IFG, which covaried to a greater extent with hippocampal, parahippocampal, and prefrontal areas than for TD for low- as compared to high-frequency words. Given the greater lexical access demands of low frequency as compared to high-frequency words, these results may suggest specific weaknesses in accessing lexical-semantic representations during word recognition. These novel findings provide foundational insights into the nature of S-RCD, and set the stage for future investigations of this common, but understudied, reading disorder. PMID:23273430

  3. Phonological Activation in Multi-Syllabic Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.

    2007-01-01

    Three experiments were conducted to test the phonological recoding hypothesis in visual word recognition. Most studies on this issue have been conducted using mono-syllabic words, eventually constructing various models of phonological processing. Yet in many languages including English, the majority of words are multi-syllabic words. English…

  4. Preschool Children's Memory for Word Forms Remains Stable Over Several Days, but Gradually Decreases after 6 Months.

    PubMed

    Gordon, Katherine R; McGregor, Karla K; Waldier, Brigitte; Curran, Maura K; Gomez, Rebecca L; Samuelson, Larissa K

    2016-01-01

    Research on word learning has focused on children's ability to identify a target object when given the word form after a minimal number of exposures to novel word-object pairings. However, relatively little research has focused on children's ability to retrieve the word form when given the target object. The exceptions involve asking children to recall and produce forms, and children typically perform near floor on these measures. In the current study, 3- to 5-year-old children were administered a novel test of word form that allowed for recognition memory and manual responses. Specifically, when asked to label a previously trained object, children were given three forms to choose from: the target, a minimally different form, and a maximally different form. Children demonstrated memory for word forms at three post-training delays: 10 mins (short-term), 2-3 days (long-term), and 6 months to 1 year (very long-term). However, children performed worse at the very long-term delay than the other time points, and the length of the very long-term delay was negatively related to performance. When in error, children were no more likely to select the minimally different form than the maximally different form at all time points. Overall, these results suggest that children remember word forms that are linked to objects over extended post-training intervals, but that their memory for the forms gradually decreases over time without further exposures. Furthermore, memory traces for word forms do not become less phonologically specific over time; rather children either identify the correct form, or they perform at chance.

  5. Nonlinear changes in brain activity during continuous word repetition: an event-related multiparametric functional MR imaging study.

    PubMed

    Hagenbeek, R E; Rombouts, S A R B; Veltman, D J; Van Strien, J W; Witter, M P; Scheltens, P; Barkhof, F

    2007-10-01

    Changes in brain activation as a function of continuous multiparametric word recognition have not, to our knowledge, been studied before with functional MR imaging (fMRI). Our aim was to identify linear changes in brain activation and, more interestingly, nonlinear changes in brain activation as a function of extended word repetition. Fifteen healthy young right-handed individuals participated in this study. An event-related extended continuous word-recognition task with 30 target words was used to study the parametric effect of word recognition on brain activation. Word-recognition-related brain activation was studied as a function of 9 word repetitions. fMRI data were analyzed with a general linear model with regressors for linearly changing signal intensity and nonlinearly changing signal intensity, according to group average reaction time (RT) and individual RTs. A network generally associated with episodic memory recognition showed either constant or linearly decreasing brain activation as a function of word repetition. Furthermore, both anterior and posterior cingulate cortices and the left middle frontal gyrus followed the nonlinear curve of the group RT, whereas the anterior cingulate cortex was also associated with individual RT. Linear alteration in brain activation as a function of word repetition explained most changes in blood oxygen level-dependent signal intensity. Using a hierarchically orthogonalized model, we found evidence for nonlinear activation associated with both group and individual RTs.
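
    The design-matrix idea described above (a linear repetition regressor plus a nonlinear regressor following the RT curve, hierarchically orthogonalized) can be sketched with numpy. The RT values are invented and the sketch omits convolution with a hemodynamic response function, so it only illustrates the orthogonalization step, not the authors' full fMRI model.

    ```python
    import numpy as np

    # Nine presentations of each target word and an illustrative group-average RT curve (ms).
    repetition = np.arange(1, 10)
    group_rt = np.array([820, 760, 720, 700, 690, 688, 690, 695, 700], dtype=float)

    # Mean-centered parametric modulators: linear repetition and the nonlinear RT curve.
    linear = repetition - repetition.mean()
    nonlinear = group_rt - group_rt.mean()

    def orthogonalize(x, against):
        """Hierarchical orthogonalization: regress x on the earlier regressor and keep
        the residuals, so the later regressor only explains variance left over."""
        A = np.column_stack([np.ones_like(against), against])
        beta, *_ = np.linalg.lstsq(A, x, rcond=None)
        return x - A @ beta

    nonlinear_orth = orthogonalize(nonlinear, linear)
    # The orthogonalized nonlinear regressor is now uncorrelated with the linear one.
    print(round(np.corrcoef(linear, nonlinear_orth)[0, 1], 6))
    ```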

  6. Lexical and sublexical units in speech perception.

    PubMed

    Giroux, Ibrahima; Rey, Arnaud

    2009-03-01

    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word-segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational models instantiating each of these strategies (i.e., simple recurrent networks: Elman, 1990; and Parser: Perruchet & Vinter, 1998) in an experiment where we compare the lexical and sublexical recognition performance of adults after hearing 2 or 10 min of an artificial spoken language. The results are consistent with Parser's predictions and the clustering approach, showing that performance on words is better than performance on part-words only after 10 min. This result suggests that word segmentation abilities are not merely due to stronger associations between sublexical units but to the emergence of stronger lexical representations during the development of speech perception processes. Copyright © 2009, Cognitive Science Society, Inc.
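
    The statistical regularities at issue in such artificial-language studies are usually summarized as forward transitional probabilities between adjacent syllables, which are high within words and lower across word boundaries. The toy sketch below computes them for an invented syllable stream; it is not an implementation of either the recurrent-network or the Parser/clustering model compared in the study.

    ```python
    import random
    from collections import Counter

    random.seed(0)
    words = ["bidaku", "padoti", "golabu"]                 # invented trisyllabic "words"
    stream = [random.choice(words) for _ in range(300)]    # continuous, unsegmented exposure
    syllables = [w[i:i + 2] for w in stream for i in range(0, 6, 2)]

    bigrams = Counter(zip(syllables, syllables[1:]))
    unigrams = Counter(syllables[:-1])

    def transitional_probability(a, b):
        """Forward transitional probability P(b | a)."""
        return bigrams[(a, b)] / unigrams[a]

    print("within word  bi -> da:", round(transitional_probability("bi", "da"), 2))   # ~1.0
    print("across words ku -> pa:", round(transitional_probability("ku", "pa"), 2))   # ~0.33
    ```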

  7. Can a Novel Word Repetition Task Be a Language-Neutral Assessment Tool? Evidence from Welsh-English Bilingual Children

    ERIC Educational Resources Information Center

    Sharp, Kathryn M; Gathercole, Virginia C. Mueller

    2013-01-01

    In recent years, there has been growing recognition of a need for a general, non-language-specific assessment tool that could be used to evaluate general speech and language abilities in children, especially to assist in identifying atypical development in bilingual children who speak a language unfamiliar to the assessor. It has been suggested…

  8. Recognition and Comprehension of "Narrow Focus" by Young Adults with Prelingual Hearing Loss Using Hearing Aids or Cochlear Implants

    ERIC Educational Resources Information Center

    Segal, Osnat; Kishon-Rabin, Liat

    2017-01-01

    Purpose: The stressed word in a sentence (narrow focus [NF]) conveys information about the intent of the speaker and is therefore important for processing spoken language and in social interactions. The ability of participants with severe-to-profound prelingual hearing loss to comprehend NF has rarely been investigated. The purpose of this study…

  9. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    ERIC Educational Resources Information Center

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  10. Orthographic Consistency Affects Spoken Word Recognition at Different Grain-Sizes

    ERIC Educational Resources Information Center

    Dich, Nadya

    2014-01-01

    A number of previous studies found that the consistency of sound-to-spelling mappings (feedback consistency) affects spoken word recognition. In auditory lexical decision experiments, words that can only be spelled one way are recognized faster than words with multiple potential spellings. Previous studies demonstrated this by manipulating…

  11. Modelling the Effects of Semantic Ambiguity in Word Recognition

    ERIC Educational Resources Information Center

    Rodd, Jennifer M.; Gaskell, M. Gareth; Marslen-Wilson, William D.

    2004-01-01

    Most words in English are ambiguous between different interpretations; words can mean different things in different contexts. We investigate the implications of different types of semantic ambiguity for connectionist models of word recognition. We present a model in which there is competition to activate distributed semantic representations. The…

  12. Speech Perception, Word Recognition and the Structure of the Lexicon. Research on Speech Perception Progress Report No. 10.

    ERIC Educational Resources Information Center

    Pisoni, David B.; And Others

    The results of three projects concerned with auditory word recognition and the structure of the lexicon are reported in this paper. The first project described was designed to test experimentally several specific predictions derived from MACS, a simulation model of the Cohort Theory of word recognition. The second project description provides the…

  13. Bilingual Word Recognition in Deaf and Hearing Signers: Effects of Proficiency and Language Dominance on Cross-Language Activation

    ERIC Educational Resources Information Center

    Morford, Jill P.; Kroll, Judith F.; Piñar, Pilar; Wilkinson, Erin

    2014-01-01

    Recent evidence demonstrates that American Sign Language (ASL) signs are active during print word recognition in deaf bilinguals who are highly proficient in both ASL and English. In the present study, we investigate whether signs are active during print word recognition in two groups of unbalanced bilinguals: deaf ASL-dominant and hearing…

  14. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps, counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  15. Interdependence of Linguistic and Indexical Speech Perception Skills in School-Aged Children with Early Cochlear Implantation

    PubMed Central

    Geers, Ann; Davidson, Lisa; Uchanski, Rosalie; Nicholas, Johanna

    2013-01-01

    Objectives This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Pre-implant aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Design Sixty 9–12 year olds, first implanted at an early age (12–38 months), participated in a comprehensive test battery that included the following LSP skills: 1) recognition of monosyllabic words at loud and soft levels, 2) repetition of phonemes and suprasegmental features from non-words, and 3) recognition of keywords from sentences presented within a noise background, and the following ISP skills: 1) discrimination of male from female and female from female talkers and 2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the non-word repetition, and talker- and emotion-perception tasks for comparison. Results Word recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of keywords presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from non-word stimuli at about the same level of accuracy as suprasegmental attributes (70% and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores and these components were highly correlated (r = .76). Both LSP and ISP component scores were higher for children who received a CI at the youngest ages, upgraded to more recent CI technology and had lower CI-aided thresholds. Higher scores, for both LSP and ISP components, were also associated with higher language levels and mainstreaming at younger ages. Higher ISP scores were associated with better social skills. Conclusions Results strongly support a link between indexical and linguistic properties in perceptual analysis of speech. These two channels of information appear to be processed together in parallel by the auditory system and are inseparable in perception. Better speech performance, for both linguistic and indexical perception, is associated with younger age at implantation and use of more recent speech processor technology. Children with better speech perception demonstrated better spoken language, earlier academic mainstreaming, and placement in more typically-sized classrooms (i.e., >20 students). Well-developed social skills were more highly associated with the ability to discriminate the nuances of talker identity and emotion than with the ability to recognize words and sentences through listening. The extent to which early cochlear implantation enabled these early-implanted children to make use of both linguistic and indexical properties of speech influenced not only their development of spoken language, but also their ability to function successfully in a hearing world. PMID:23652814
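
    The composite-scoring step described above (collapsing each battery into a principal component score and correlating the two) can be sketched with scikit-learn. The simulated scores below are placeholders, not the study's data; note that the sign of a principal component is arbitrary, so only the magnitude of the correlation is meaningful.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(2)
    n_children = 60

    # Simulated z-scored test batteries; a shared ability factor links LSP and ISP measures.
    ability = rng.standard_normal(n_children)
    lsp_measures = np.column_stack([ability + 0.6 * rng.standard_normal(n_children) for _ in range(3)])
    isp_measures = np.column_stack([ability + 0.8 * rng.standard_normal(n_children) for _ in range(3)])

    # The first principal component of each battery serves as its composite score.
    lsp_component = PCA(n_components=1).fit_transform(lsp_measures).ravel()
    isp_component = PCA(n_components=1).fit_transform(isp_measures).ravel()

    print("|r(LSP, ISP)| =", round(abs(np.corrcoef(lsp_component, isp_component)[0, 1]), 2))
    ```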

  16. The Effect of Word Associations on the Recognition of Flashed Words.

    ERIC Educational Resources Information Center

    Samuels, S. Jay

    The hypothesis that when associated pairs of words are presented, speed of recognition will be faster than when nonassociated word pairs are presented or when a target word is presented by itself was tested. Twenty university students, initially screened for vision, were assigned randomly to rows of a 5 x 5 repeated-measures Latin square design.…

  17. Factors Affecting Open-Set Word Recognition in Adults with Cochlear Implants

    PubMed Central

    Holden, Laura K.; Finley, Charles C.; Firszt, Jill B.; Holden, Timothy A.; Brenner, Christine; Potts, Lisa G.; Gotter, Brenda D.; Vanderhoof, Sallie S.; Mispagel, Karen; Heydebrand, Gitry; Skinner, Margaret W.

    2012-01-01

    A monosyllabic word test was administered to 114 postlingually-deaf adult cochlear implant (CI) recipients at numerous intervals from two weeks to two years post-initial CI activation. Biographic/audiologic information, electrode position, and cognitive ability were examined to determine factors affecting CI outcomes. Results revealed that Duration of Severe-to-Profound Hearing Loss, Age at Implantation, CI Sound-field Threshold Levels, Percentage of Electrodes in Scala Vestibuli, Medio-lateral Electrode Position, Insertion Depth, and Cognition were among the factors that affected performance. Knowledge of how factors affect performance can influence counseling, device fitting, and rehabilitation for patients and may contribute to improved device design. PMID:23348845

  18. Speaker information affects false recognition of unstudied lexical-semantic associates.

    PubMed

    Luthra, Sahil; Fox, Neal P; Blumstein, Sheila E

    2018-05-01

    Recognition of and memory for a spoken word can be facilitated by a prior presentation of that word spoken by the same talker. However, it is less clear whether this speaker congruency advantage generalizes to facilitate recognition of unheard related words. The present investigation employed a false memory paradigm to examine whether information about a speaker's identity in items heard by listeners could influence the recognition of novel items (critical intruders) phonologically or semantically related to the studied items. In Experiment 1, false recognition of semantically associated critical intruders was sensitive to speaker information, though only when subjects attended to talker identity during encoding. Results from Experiment 2 also provide some evidence that talker information affects the false recognition of critical intruders. Taken together, the present findings indicate that indexical information is able to contact the lexical-semantic network to affect the processing of unheard words.

  19. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

    Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations, it is hoped that new insights can be gained into the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found not to behave like the fusiform face area, in that unusual text orientations resulted in increased rather than decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  1. Aging and IQ effects on associative recognition and priming in item recognition

    PubMed Central

    McKoon, Gail; Ratcliff, Roger

    2012-01-01

    Two ways to examine memory for associative relationships between pairs of words were tested: an explicit method, associative recognition, and an implicit method, priming in item recognition. In an experiment with both kinds of tests, participants were asked to learn pairs of words. For the explicit test, participants were asked to decide whether two words of a test pair had been studied in the same or different pairs. For the implicit test, participants were asked to decide whether single words had or had not been among the studied pairs. Some test words were immediately preceded in the test list by the other word of the same pair and some by a word from a different pair. Diffusion model (Ratcliff, 1978; Ratcliff & McKoon, 2008) analyses were carried out for both tasks for college-age participants, 60–74 year olds, and 75–90 year olds, and for higher- and lower-IQ participants, in order to compare the two measures of associative strength. Results showed parallel behavior of drift rates for associative recognition and priming across ages and across IQ, indicating that they are based, at least to some degree, on the same information in memory. PMID:24976676

  2. Evaluating Effects of Divided Hemispheric Processing on Word Recognition in Foveal and Extrafoveal Displays: The Evidence from Arabic

    PubMed Central

    Almabruk, Abubaker A. A.; Paterson, Kevin B.; McGowan, Victoria; Jordan, Timothy R.

    2011-01-01

    Background Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist. Methods and Findings Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task. Results Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations. Conclusions These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed. PMID:21559084

  3. The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition

    ERIC Educational Resources Information Center

    Robert, Christelle; Mathey, Stephanie; Zagar, Daniel

    2007-01-01

    The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g., nasse/basse-lasse-tasse-masse) or were unequally…

  4. Morpho-Semantic Processing in Word Recognition: Evidence from Balanced and Biased Ambiguous Morphemes

    ERIC Educational Resources Information Center

    Tsang, Yiu-Kei; Chen, Hsuan-Chih

    2013-01-01

    The role of morphemic meaning in Chinese word recognition was examined with the masked and unmasked priming paradigms. Target words contained ambiguous morphemes biased toward the dominant or the subordinate meanings. Prime words either contained the same ambiguous morphemes in the subordinate interpretations or were unrelated to the targets. In…

  5. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  6. Evidence for Early Morphological Decomposition in Visual Word Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2010-01-01

    We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…

  7. Morphological Structures in Visual Word Recognition: The Case of Arabic

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)

    2004-01-01

    This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…

  8. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  9. Lexical leverage: Category knowledge boosts real-time novel word recognition in two-year- olds

    PubMed Central

    Borovsky, Arielle; Ellis, Erica M.; Evans, Julia L.; Elman, Jeffrey L.

    2016-01-01

    Recent research suggests that infants tend to add words to their vocabulary that are semantically related to other known words, though it is not clear why this pattern emerges. In this paper, we explore whether infants leverage their existing vocabulary and semantic knowledge when interpreting novel label-object mappings in real time. We initially identified categorical domains for which individual 24-month-old infants have relatively higher and lower levels of knowledge, irrespective of overall vocabulary size. Next, we taught infants novel words in these higher and lower knowledge domains and then asked if their subsequent real-time recognition of these items varied as a function of their category knowledge. While our participants successfully acquired the novel label-object mappings in our task, there were important differences in the way infants recognized these words in real time. Namely, infants showed more robust recognition of high (vs. low) domain knowledge words. These findings suggest that dense semantic structure facilitates early word learning and real-time novel word recognition. PMID:26452444

  10. How does Interhemispheric Communication in Visual Word Recognition Work? Deciding between Early and Late Integration Accounts of the Split Fovea Theory

    ERIC Educational Resources Information Center

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.

    2009-01-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…

  11. Semantic contribution to verbal short-term memory: are pleasant words easier to remember than neutral words in serial recall and serial recognition?

    PubMed

    Monnier, Catherine; Syssau, Arielle

    2008-01-01

    In the four experiments reported here, we examined the role of word pleasantness on immediate serial recall and immediate serial recognition. In Experiment 1, we compared verbal serial recall of pleasant and neutral words, using a limited set of items. In Experiment 2, we replicated Experiment 1 with an open set of words (i.e., new items were used on every trial). In Experiments 3 and 4, we assessed immediate serial recognition of pleasant and neutral words, using item sets from Experiments 1 and 2. Pleasantness was found to have a facilitation effect on both immediate serial recall and immediate serial recognition. This study supplies some new supporting arguments in favor of a semantic contribution to verbal short-term memory performance. The pleasantness effect observed in immediate serial recognition showed that, contrary to a number of earlier findings, performance on this task can also turn out to be dependent on semantic factors. The results are discussed in relation to nonlinguistic and psycholinguistic models of short-term memory.

  12. Iconic gestures prime related concepts: an ERP study.

    PubMed

    Wu, Ying Croon; Coulson, Seana

    2007-02-01

    To assess priming by iconic gestures, we recorded EEG (at 29 scalp sites) in two experiments while adults watched short, soundless videos of spontaneously produced, cospeech iconic gestures followed by related or unrelated probe words. In Experiment 1, participants classified the relatedness between gestures and words. In Experiment 2, they attended to stimuli, and performed an incidental recognition memory test on words presented during the EEG recording session. Event-related potentials (ERPs) time-locked to the onset of probe words were measured, along with response latencies and word recognition rates. Although word relatedness did not affect reaction times or recognition rates, contextually related probe words elicited less-negative ERPs than did unrelated ones between 300 and 500 msec after stimulus onset (N400) in both experiments. These findings demonstrate sensitivity to semantic relations between iconic gestures and words in brain activity engendered during word comprehension.

  13. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  14. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  15. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners

    PubMed Central

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they can process a reading task without time constraints. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, little research has tested it in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraints on reading affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. The correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained than the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and limited working memory in non-time-constrained reading. The participants were tested on the same computerized English word recognition and Operation Span tests, thought aloud while reading, and completed the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comments. The correlation analyses showed that word recognition and working memory were significantly related to the frequency of language-oriented strategies, re-reading, and pausing, but not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs. PMID:28522984

  16. Applicability of the Compensatory Encoding Model in Foreign Language Reading: An Investigation with Chinese College English Language Learners.

    PubMed

    Han, Feifei

    2017-01-01

    While some first language (L1) reading models suggest that inefficient word recognition and small working memory tend to inhibit higher-level comprehension processes, the Compensatory Encoding Model maintains that slow word recognition and small working memory do not normally hinder reading comprehension, because readers can deploy metacognitive strategies to compensate for inefficient word recognition and working memory limitations as long as they can process a reading task without time constraints. Although empirical evidence has accumulated in support of the Compensatory Encoding Model in L1 reading, little research has tested it in foreign language (FL) reading. This research empirically tested the Compensatory Encoding Model in English reading among Chinese college English language learners (ELLs). Two studies were conducted. Study one tested whether time constraints on reading affect the relationship between word recognition, working memory, and reading comprehension. Students were tested on a computerized English word recognition test, a computerized Operation Span task, and reading comprehension under time-constrained and non-time-constrained conditions. The correlation and regression analyses showed that the associations between word recognition, working memory, and reading comprehension were much stronger under the time-constrained than the non-time-constrained condition. Study two examined whether FL readers were able to use metacognitive reading strategies to compensate for inefficient word recognition and limited working memory in non-time-constrained reading. The participants were tested on the same computerized English word recognition and Operation Span tests, thought aloud while reading, and completed the comprehension questions. The think-aloud protocols were coded for concurrent use of reading strategies, classified into language-oriented strategies, content-oriented strategies, re-reading, pausing, and meta-comments. The correlation analyses showed that word recognition and working memory were significantly related to the frequency of language-oriented strategies, re-reading, and pausing, but not to reading comprehension. Jointly viewed, the results of the two studies, complementing each other, support the applicability of the Compensatory Encoding Model in FL reading with Chinese college ELLs.

  17. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    PubMed

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating the vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, the word-initial consonant contrasts /p-h/ and /b-f/, and the word-final contrast /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  18. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability-Implications for Cochlear Implant Candidacy.

    PubMed

    Firszt, Jill B; Reeder, Ruth M; Holden, Laura K

    At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of covariables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc), and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-sex-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal-hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal-hearing participant groups were not significantly different for speech in noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments, and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates.

  19. Tracking the Time Course of Word-Frequency Effects in Auditory Word Recognition with Event-Related Potentials

    ERIC Educational Resources Information Center

    Dufour, Sophie; Brunelliere, Angele; Frauenfelder, Ulrich H.

    2013-01-01

    Although the word-frequency effect is one of the most established findings in spoken-word recognition, the precise processing locus of this effect is still a topic of debate. In this study, we used event-related potentials (ERPs) to track the time course of the word-frequency effect. In addition, the neighborhood density effect, which is known to…

  20. Effect of Orthographic Processes on Letter Identity and Letter-Position Encoding in Dyslexic Children

    PubMed Central

    Reilhac, Caroline; Jucla, Mélanie; Iannuzzi, Stéphanie; Valdois, Sylviane; Démonet, Jean-François

    2012-01-01

    The ability to identify letters and encode their position is a crucial step of the word recognition process. However, despite their word identification problems, the ability of dyslexic children to encode letter identity and letter position within strings has not been systematically investigated. This study aimed to fill this gap and further explored how letter identity and letter-position encoding are modulated by letter context in developmental dyslexia. For this purpose, a letter-string comparison task was administered to French dyslexic children and two chronological age (CA) and reading age (RA)-matched control groups. Children had to judge whether two successively and briefly presented four-letter strings were identical or different. Letter position and letter identity were manipulated through the transposition (e.g., RTGM vs. RMGT) or substitution of two letters (e.g., TSHF vs. TGHD). Non-words, pseudo-words, and words were used as stimuli to investigate sub-lexical and lexical effects on letter encoding. Dyslexic children showed both substitution and transposition detection problems relative to CA-controls. A substitution advantage over transpositions was only found for words in dyslexic children, whereas it extended to pseudo-words in RA-controls and to all types of items in CA-controls. Letters were better identified in the dyslexic group when belonging to orthographically familiar strings. Letter-position encoding was severely impaired in dyslexic children, who did not show any word context effect, in contrast to CA-controls. Overall, the current findings point to a strong letter identity and letter-position encoding disorder in developmental dyslexia. PMID:22661961

  1. Older and Wiser: Older Adults’ Episodic Word Memory Benefits from Sentence Study Contexts

    PubMed Central

    Matzen, Laura E.; Benjamin, Aaron S.

    2013-01-01

    A hallmark of adaptive cognition is the ability to modulate learning in response to the demands posed by different types of tests and different types of materials. Here we evaluate how older adults process words and sentences differently by examining patterns of memory errors. In two experiments, we explored younger and older adults’ sensitivity to lures on a recognition test following study of words in these two types of contexts. Among the studied words were compound words such as “blackmail” and “jailbird” that were related to conjunction lures (e.g. “blackbird”) and semantic lures (e.g. “criminal”). Participants engaged in a recognition test that included old items, conjunction lures, semantic lures, and unrelated new items. In both experiments, younger and older adults had the same general pattern of memory errors: more incorrect endorsements of semantic than conjunction lures following sentence study and more incorrect endorsements of conjunction than semantic lures following list study. The similar pattern reveals that older and younger adults responded to the constraints of the two different study contexts in similar ways. However, while younger and older adults showed similar levels of memory performance for the list study context, the sentence study context elicited superior memory performance in the older participants. It appears as though memory tasks that take advantage of greater expertise in older adults--in this case, greater experience with sentence processing--can reveal superior memory performance in the elderly. PMID:23834493

  2. The locus of word frequency effects in skilled spelling-to-dictation.

    PubMed

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  3. The role of semantically related distractors during encoding and retrieval of words in long-term memory.

    PubMed

    Meade, Melissa E; Fernandes, Myra A

    2016-07-01

    We examined the influence of divided attention (DA) on recognition of words when the concurrent task was semantically related or unrelated to the to-be-recognised target words. Participants were asked to either study or retrieve a target list of semantically related words while simultaneously making semantic decisions (i.e., size judgements) to another set of related or unrelated words heard concurrently. We manipulated semantic relatedness of distractor to target words, and whether DA occurred during the encoding or retrieval phase of memory. Recognition accuracy was significantly diminished relative to full attention, following DA conditions at encoding, regardless of relatedness of distractors to study words. However, response times (RTs) were slower with related compared to unrelated distractors. Similarly, under DA at retrieval, recognition RTs were slower when distractors were semantically related than unrelated to target words. Unlike the effect from DA at encoding, recognition accuracy was worse under DA at retrieval when the distractors were related compared to unrelated to the target words. Results suggest that availability of general attentional resources is critical for successful encoding, whereas successful retrieval is particularly reliant on access to a semantic code, making it sensitive to related distractors under DA conditions.

  4. Testing Measurement Invariance across Groups of Children with and without Attention-Deficit/ Hyperactivity Disorder: Applications for Word Recognition and Spelling Tasks

    PubMed Central

    Lúcio, Patrícia S.; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M.; Bressan, Rodrigo A.; Gadelha, Ary; Rohde, Luis A.; Cogo-Moreira, Hugo

    2017-01-01

    Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants (n = 1,935; 47% female; 11% ADHD) were children aged 6–15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed. PMID:29118733

  5. Testing Measurement Invariance across Groups of Children with and without Attention-Deficit/ Hyperactivity Disorder: Applications for Word Recognition and Spelling Tasks.

    PubMed

    Lúcio, Patrícia S; Salum, Giovanni; Swardfager, Walter; Mari, Jair de Jesus; Pan, Pedro M; Bressan, Rodrigo A; Gadelha, Ary; Rohde, Luis A; Cogo-Moreira, Hugo

    2017-01-01

    Although studies have consistently demonstrated that children with attention-deficit/hyperactivity disorder (ADHD) perform significantly lower than controls on word recognition and spelling tests, such studies rely on the assumption that those groups are comparable in these measures. This study investigates comparability of word recognition and spelling tests based on diagnostic status for ADHD through measurement invariance methods. The participants ( n = 1,935; 47% female; 11% ADHD) were children aged 6-15 with normal IQ (≥70). Measurement invariance was investigated through Confirmatory Factor Analysis and Multiple Indicators Multiple Causes models. Measurement invariance was attested in both methods, demonstrating the direct comparability of the groups. Children with ADHD were 0.51 SD lower in word recognition and 0.33 SD lower in spelling tests than controls. Results suggest that differences in performance on word recognition and spelling tests are related to true mean differences based on ADHD diagnostic status. Implications for clinical practice and research are discussed.

  6. False recognition production indexes in Spanish for 60 DRM lists with three critical words.

    PubMed

    Beato, Maria Soledad; Díez, Emiliano

    2011-06-01

    A normative study was conducted using the Deese/Roediger-McDermott paradigm (DRM) to obtain false recognition for 60 six-word lists in Spanish, designed with a completely new methodology. For the first time, lists included words (e.g., bridal, newlyweds, bond, commitment, couple, to marry) simultaneously associated with three critical words (e.g., love, wedding, marriage). Backward associative strength between lists and critical words was taken into account when creating the lists. The results showed that all lists produced false recognition. Moreover, some lists had a high false recognition rate (e.g., 65%; jail, inmate, prison: bars, prisoner, cell, offender, penitentiary, imprisonment). This is an aspect of special interest for those DRM experiments that, for example, record brain electrical activity. This type of list will enable researchers to raise the signal-to-noise ratio in false recognition event-related potential studies as they increase the number of critical trials per list, and it will be especially useful for the design of future research.

  7. Contextual diversity facilitates learning new words in the classroom.

    PubMed

    Rosa, Eva; Tapia, José Luis; Perea, Manuel

    2017-01-01

    In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings.

  8. Contextual diversity facilitates learning new words in the classroom

    PubMed Central

    Rosa, Eva; Tapia, José Luis; Perea, Manuel

    2017-01-01

    In the field of word recognition and reading, it is commonly assumed that frequently repeated words create more accessible memory traces than infrequently repeated words, thus capturing the word-frequency effect. Nevertheless, recent research has shown that a seemingly related factor, contextual diversity (defined as the number of different contexts [e.g., films] in which a word appears), is a better predictor than word-frequency in word recognition and sentence reading experiments. Recent research has shown that contextual diversity plays an important role when learning new words in a laboratory setting with adult readers. In the current experiment, we directly manipulated contextual diversity in a very ecological scenario: at school, when Grade 3 children were learning words in the classroom. The new words appeared in different contexts/topics (high-contextual diversity) or only in one of them (low-contextual diversity). Results showed that words encountered in different contexts were learned and remembered more effectively than those presented in redundant contexts. We discuss the practical (educational [e.g., curriculum design]) and theoretical (models of word recognition) implications of these findings. PMID:28586354

  9. Medical Named Entity Recognition for Indonesian Language Using Word Representations

    NASA Astrophysics Data System (ADS)

    Rahman, Arief

    2018-03-01

    Nowadays, Named Entity Recognition (NER) systems are used on medical texts to obtain important medical information, such as diseases, symptoms, and drugs. While most NER systems are applied to formal medical texts, informal texts such as those from social media (also called semi-formal texts) are starting to gain recognition as a gold mine for medical information. We propose a theoretical Named Entity Recognition (NER) model for semi-formal medical texts in our medical knowledge management system by comparing two kinds of word representations: cluster-based word representations and distributed representations.
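
    As a rough illustration of the two representation types this record contrasts, the sketch below (not the authors' system; the lookup tables and tokens are invented stand-ins) shows how a single token could be turned either into discrete cluster-prefix features or into a dense embedding vector before being passed to a sequence labeller.

    ```python
    # Minimal sketch: two ways of representing a token for a medical NER tagger.
    # All lookup tables here are tiny, hand-made stand-ins for real resources.

    from typing import List

    # Hypothetical cluster-based representation: each word maps to a Brown-style
    # cluster bit-string; prefixes of the string serve as coarse-to-fine features.
    BROWN_CLUSTERS = {"demam": "0110", "panas": "0111", "paracetamol": "1010"}

    # Hypothetical distributed representation: each word maps to a dense vector
    # (in practice learned from a large medical/social-media corpus).
    EMBEDDINGS = {"demam": [0.21, -0.53, 0.08], "panas": [0.19, -0.49, 0.11],
                  "paracetamol": [-0.40, 0.77, 0.33]}

    def cluster_features(word: str) -> List[str]:
        """Discrete features from cluster prefixes (suited to a CRF-style tagger)."""
        bits = BROWN_CLUSTERS.get(word.lower(), "")
        return [f"cluster_prefix_{k}={bits[:k]}" for k in (2, 4) if bits]

    def embedding_features(word: str) -> List[float]:
        """Continuous features from a distributed representation (for a neural tagger)."""
        return EMBEDDINGS.get(word.lower(), [0.0, 0.0, 0.0])

    if __name__ == "__main__":
        for token in ["demam", "ibuprofen"]:
            print(token, cluster_features(token), embedding_features(token))
    ```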

  10. The impact of left and right intracranial tumors on picture and word recognition memory.

    PubMed

    Goldstein, Bram; Armstrong, Carol L; Modestino, Edward; Ledakis, George; John, Cameron; Hunter, Jill V

    2004-02-01

    This study investigated the effects of left and right intracranial tumors on picture and word recognition memory. We hypothesized that left hemispheric (LH) patients would exhibit greater word recognition memory impairment than right hemispheric (RH) patients, with no significant hemispheric group picture recognition memory differences. The LH patient group obtained a significantly slower mean picture recognition reaction time than the RH group. The LH group had a higher proportion of tumors extending into the temporal lobes, possibly accounting for their greater pictorial processing impairments. Dual coding and enhanced visual imagery may have contributed to the patient groups' similar performance on the remainder of the measures.

  11. Reading skill related to left ventral occipitotemporal cortex during a phonological awareness task in 5-6-year old children.

    PubMed

    Wang, Jin; Joanisse, Marc F; Booth, James R

    2018-04-01

    The left ventral occipitotemporal cortex (vOT) is important in visual word recognition. Studies have shown that the left vOT is also engaged during spoken language processing in skilled readers, suggesting automatic access to corresponding orthographic information. However, little is known about where and how the left vOT is involved in the spoken language processing of young children with emerging reading ability. To answer this question, we examined the relation of reading ability in 5-6-year-old kindergarteners to the activation of vOT during an auditory phonological awareness task. Two experimental conditions were compared: onset word pairs that shared the first phoneme and rhyme word pairs that shared the final biphone/triphone, allowing a measurement of vOT activation for small (i.e., onset) and large (i.e., rhyme) grain sizes. We found that higher reading ability was associated with better accuracy in the onset, but not the rhyme, condition. In addition, higher reading ability was associated with greater sensitivity in the posterior left vOT only for the contrast of the onset versus the rhyme condition. These results suggest that the acquisition of reading results in greater specialization of the posterior vOT for smaller rather than larger grain sizes in young children. Copyright © 2018. Published by Elsevier Ltd.

  12. Test of a motor theory of long-term auditory memory

    PubMed Central

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-01-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75–80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve. PMID:22511719

  13. Test of a motor theory of long-term auditory memory.

    PubMed

    Schulze, Katrin; Vargha-Khadem, Faraneh; Mishkin, Mortimer

    2012-05-01

    Monkeys can easily form lasting central representations of visual and tactile stimuli, yet they seem unable to do the same with sounds. Humans, by contrast, are highly proficient in auditory long-term memory (LTM). These mnemonic differences within and between species raise the question of whether the human ability is supported in some way by speech and language, e.g., through subvocal reproduction of speech sounds and by covert verbal labeling of environmental stimuli. If so, the explanation could be that storing rapidly fluctuating acoustic signals requires assistance from the motor system, which is uniquely organized to chain-link rapid sequences. To test this hypothesis, we compared the ability of normal participants to recognize lists of stimuli that can be easily reproduced, labeled, or both (pseudowords, nonverbal sounds, and words, respectively) versus their ability to recognize a list of stimuli that can be reproduced or labeled only with great difficulty (reversed words, i.e., words played backward). Recognition scores after 5-min delays filled with articulatory-suppression tasks were relatively high (75-80% correct) for all sound types except reversed words; the latter yielded scores that were not far above chance (58% correct), even though these stimuli were discriminated nearly perfectly when presented as reversed-word pairs at short intrapair intervals. The combined results provide preliminary support for the hypothesis that participation of the oromotor system may be essential for laying down the memory of speech sounds and, indeed, that speech and auditory memory may be so critically dependent on each other that they had to coevolve.

  14. Sight Word Recognition among Young Children At-Risk: Picture-Supported vs. Word-Only

    ERIC Educational Resources Information Center

    Meadan, Hedda; Stoner, Julia B.; Parette, Howard P.

    2008-01-01

    A quasi-experimental design was used to investigate the impact of Picture Communication Symbols (PCS) on sight word recognition by young children identified as "at risk" for academic and social-behavior difficulties. Ten pre-primer and 10 primer Dolch words were presented to 23 students in the intervention group and 8 students in the…

  15. Word Recognition Error Analysis: Comparing Isolated Word List and Oral Passage Reading

    ERIC Educational Resources Information Center

    Flynn, Lindsay J.; Hosp, John L.; Hosp, Michelle K.; Robbins, Kelly P.

    2011-01-01

    The purpose of this study was to determine the relation between word recognition errors made at a letter-sound pattern level on a word list and on a curriculum-based measurement oral reading fluency measure (CBM-ORF) for typical and struggling elementary readers. The participants were second, third, and fourth grade typical and struggling readers…

  16. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Abstract Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  17. Word Recognition is Affected by the Meaning of Orthographic Neighbours: Evidence from Semantic Decision Tasks

    ERIC Educational Resources Information Center

    Boot, Inge; Pecher, Diane

    2008-01-01

    Many models of word recognition predict that neighbours of target words will be activated during word processing. Cascaded models can make the additional prediction that semantic features of those neighbours get activated before the target has been uniquely identified. In two semantic decision tasks neighbours that were congruent (i.e., from the…

  18. Semantic Ambiguity Effects in L2 Word Recognition

    ERIC Educational Resources Information Center

    Ishida, Tomomi

    2018-01-01

    The present study examined the ambiguity effects in second language (L2) word recognition. Previous studies on first language (L1) lexical processing have observed that ambiguous words are recognized faster and more accurately than unambiguous words on lexical decision tasks. In this research, L1 and L2 speakers of English were asked whether a…

  19. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  20. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  1. From Numbers to Letters: Feedback Regularization in Visual Word Recognition

    ERIC Educational Resources Information Center

    Molinaro, Nicola; Dunabeitia, Jon Andoni; Marin-Gutierrez, Alejandro; Carreiras, Manuel

    2010-01-01

    Word reading in alphabetic languages involves letter identification, independently of the format in which these letters are written. This process of letter "regularization" is sensitive to word context, leading to the recognition of a word even when numbers that resemble letters are inserted among other real letters (e.g., M4TERI4L). The present…

  2. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  3. Reading component skills in dyslexia: word recognition, comprehension and processing speed.

    PubMed

    de Oliveira, Darlene G; da Silva, Patrícia B; Dias, Natália M; Seabra, Alessandra G; Macedo, Elizeu C

    2014-01-01

    The cognitive model of reading comprehension (RC) posits that RC results from the interaction between decoding and linguistic comprehension. Recently, the notion of decoding skill was expanded to include word recognition. In addition, some studies suggest that other skills, such as processing speed, could be integrated into this model, and have consistently indicated that processing speed influences and is an important predictor of the model's main components, such as vocabulary for comprehension and phonological awareness for word recognition. The present study evaluated the components of the RC model and predictive skills in children and adolescents with dyslexia. Forty children and adolescents (8-13 years) were divided into a dyslexic group (DG; 18 children, MA = 10.78, SD = 1.66) and a control group (CG; 22 children, MA = 10.59, SD = 1.86). All were students from the 2nd to 8th grade of elementary school, and the groups were equivalent in school grade, age, gender, and IQ. Oral and reading comprehension, word recognition, processing speed, picture naming, receptive vocabulary, and phonological awareness were assessed. There were no group differences in accuracy on the oral and reading comprehension, phonological awareness, naming, and vocabulary measures. The DG performed worse than the CG in word recognition (general score and orthographic confusion items) and was slower in naming. The results corroborate the literature regarding word recognition and processing speed deficits in dyslexia. However, dyslexics can achieve normal scores on RC tests. The data support the importance of distinguishing the different reading strategies embedded in the word recognition component. The role of processing speed in reading problems remains unclear.

  4. Evidence for the Activation of Sensorimotor Information during Visual Word Recognition: The Body-Object Interaction Effect

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…

  5. How vocabulary size in two languages relates to efficiency in spoken word recognition by young Spanish-English bilinguals

    PubMed Central

    Marchman, Virginia A.; Fernald, Anne; Hurtado, Nereyda

    2010-01-01

    Research using online comprehension measures with monolingual children shows that speed and accuracy of spoken word recognition are correlated with lexical development. Here we examined speech processing efficiency in relation to vocabulary development in bilingual children learning both Spanish and English (n=26; 2;6 yrs). Between-language associations were weak: vocabulary size in Spanish was uncorrelated with vocabulary in English, and children’s facility in online comprehension in Spanish was unrelated to their facility in English. Instead, efficiency of online processing in one language was significantly related to vocabulary size in that language, after controlling for processing speed and vocabulary size in the other language. These links between efficiency of lexical access and vocabulary knowledge in bilinguals parallel those previously reported for Spanish and English monolinguals, suggesting that children’s ability to abstract information from the input in building a working lexicon relates fundamentally to mechanisms underlying the construction of language. PMID:19726000

  6. A systematic review of readability and comprehension instruments used for print and web-based cancer information.

    PubMed

    Friedman, Daniela B; Hoffman-Goetz, Laurie

    2006-06-01

    Adequate functional literacy skills positively influence individuals' ability to take control of their health. Print and Web-based cancer information is often written at difficult reading levels. This systematic review evaluates readability instruments (FRE, F-K, Fog, SMOG, Fry) used to assess print and Web-based cancer information and word recognition and comprehension tests (Cloze, REALM, TOFHLA, WRAT) that measure people's health literacy. Articles on readability and comprehension instruments explicitly used for cancer information were assembled by searching MEDLINE and PsycINFO from 1993 to 2003. In all, 23 studies were included: 16 on readability, 6 on comprehension, and 1 on both readability and comprehension. Of the readability investigations, 14 focused on print materials and 2 assessed Internet information. Comprehension and word recognition measures were not applied to Web-based information. None of the formulas were designed to determine the effects of visuals or design factors that could influence the readability and comprehension of cancer education information.
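
    For orientation, the sketch below shows how two of the readability formulas named in this record (Flesch Reading Ease and SMOG) can be computed. The syllable counter is a crude vowel-group heuristic and the sample passage is invented, so this is an approximation rather than a validated implementation of these instruments.

    ```python
    # Minimal sketch of two readability formulas (FRE and SMOG) with a rough
    # syllable-counting heuristic; not a substitute for validated tools.

    import re
    from math import sqrt

    def count_syllables(word: str) -> int:
        """Rough syllable estimate: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def readability(text: str) -> dict:
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = [count_syllables(w) for w in words]
        polysyllables = sum(1 for s in syllables if s >= 3)

        # Flesch Reading Ease: higher scores indicate easier text.
        fre = 206.835 - 1.015 * (len(words) / sentences) \
                      - 84.6 * (sum(syllables) / len(words))
        # SMOG grade: estimated years of education needed to understand the text.
        smog = 1.0430 * sqrt(polysyllables * (30 / sentences)) + 3.1291
        return {"flesch_reading_ease": round(fre, 1), "smog_grade": round(smog, 1)}

    if __name__ == "__main__":
        sample = ("Chemotherapy uses medication to destroy cancer cells. "
                  "Your care team will explain the possible side effects.")
        print(readability(sample))
    ```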

  7. Where do dialectal effects on speech processing come from? Evidence from a cross-dialect investigation.

    PubMed

    Larraza, Saioa; Samuel, Arthur G; Oñederra, Miren Lourdes

    2016-07-20

    Accented speech has been seen as an additional impediment for speech processing; it usually adds linguistic and cognitive load to the listener's task. In the current study we analyse where the processing costs of regional dialects come from, a question that has not been answered yet. We quantify the proficiency of Basque-Spanish bilinguals who have different native dialects of Basque on many dimensions and test for costs at each of three levels of processing: phonemic discrimination, word recognition, and semantic processing. The ability to discriminate a dialect-specific contrast is affected by a bilingual's linguistic background less than lexical access is, and an individual's difficulty in lexical access is correlated with basic discrimination problems. Once lexical access is achieved, dialectal variation has little impact on semantic processing. The results are discussed in terms of the presence or absence of correlations between different processing levels. The implications of the results are considered for how models of spoken word recognition handle dialectal variation.

  8. Generalized auditory agnosia with spared music recognition in a left-hander. Analysis of a case with a right temporal stroke.

    PubMed

    Mendez, M F

    2001-02-01

    After a right temporoparietal stroke, a left-handed man lost the ability to understand speech and environmental sounds but developed greater appreciation for music. The patient had preserved reading and writing but poor verbal comprehension. Slower speech, single syllable words, and minimal written cues greatly facilitated his verbal comprehension. On identifying environmental sounds, he made predominant acoustic errors. Although he failed to name melodies, he could match, describe, and sing them. The patient had normal hearing except for presbyacusis, right-ear dominance for phonemes, and normal discrimination of basic psychoacoustic features and rhythm. Further testing disclosed difficulty distinguishing tone sequences and discriminating two clicks and short-versus-long tones, particularly in the left ear. Together, these findings suggest impairment in a direct route for temporal analysis and auditory word forms in his right hemisphere to Wernicke's area in his left hemisphere. The findings further suggest a separate and possibly rhythm-based mechanism for music recognition.

  9. Morphological learning in a novel language: A cross-language comparison.

    PubMed

    Havas, Viktória; Waris, Otto; Vaquero, Lucía; Rodríguez-Fornells, Antoni; Laine, Matti

    2015-01-01

    Being able to extract and interpret the internal structure of complex word forms such as the English word dance+r+s is crucial for successful language learning. We examined whether the ability to extract morphological information during word learning is affected by the morphological features of one's native tongue. Spanish and Finnish adult participants performed a word-picture associative learning task in an artificial language where the target words included a suffix marking the gender of the corresponding animate object. The short exposure phase was followed by a word recognition task and a generalization task for the suffix. The participants' native tongues vary greatly in terms of morphological structure, leading to two opposing hypotheses. On the one hand, Spanish speakers may be more effective in identifying gender in a novel language because this feature is present in Spanish but not in Finnish. On the other hand, Finnish speakers may have an advantage as the abundance of bound morphemes in their language calls for continuous morphological decomposition. The results support the latter alternative, suggesting that lifelong experience on morphological decomposition provides an advantage in novel morphological learning.

  10. Learning and consolidation of new spoken words in autism spectrum disorder.

    PubMed

    Henderson, Lisa; Powell, Anna; Gareth Gaskell, M; Norbury, Courtenay

    2014-11-01

    Autism spectrum disorder (ASD) is characterized by rich heterogeneity in vocabulary knowledge and word knowledge that is not well accounted for by current cognitive theories. This study examines whether individual differences in vocabulary knowledge in ASD might be partly explained by a difficulty with consolidating newly learned spoken words and/or integrating them with existing knowledge. Nineteen boys with ASD and 19 typically developing (TD) boys matched on age and vocabulary knowledge showed similar improvements in recognition and recall of novel words (e.g. 'biscal') 24 hours after training, suggesting an intact ability to consolidate explicit knowledge of new spoken word forms. TD children showed competition effects for existing neighbors (e.g. 'biscuit') after 24 hours, suggesting that the new words had been integrated with existing knowledge over time. In contrast, children with ASD showed immediate competition effects that were not significant after 24 hours, suggesting a qualitative difference in the time course of lexical integration. These results are considered from the perspective of the dual-memory systems framework. © 2014 John Wiley & Sons Ltd.

  11. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    PubMed

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable of reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.

  12. No one way ticket from orthography to semantics in recognition memory: N400 and P200 effects of associations.

    PubMed

    Stuellein, Nicole; Radach, Ralph R; Jacobs, Arthur M; Hofmann, Markus J

    2016-05-15

    Computational models of word recognition already successfully used associative spreading from orthographic to semantic levels to account for false memories. But can they also account for semantic effects on event-related potentials in a recognition memory task? To address this question, target words in the present study had either many or few semantic associates in the stimulus set. We found larger P200 amplitudes and smaller N400 amplitudes for old words in comparison to new words. Words with many semantic associates led to larger P200 amplitudes and a smaller N400 in comparison to words with a smaller number of semantic associations. We also obtained inverted response time and accuracy effects for old and new words: faster response times and fewer errors were found for old words that had many semantic associates, whereas new words with a large number of semantic associates produced slower response times and more errors. Both behavioral and electrophysiological results indicate that semantic associations between words can facilitate top-down driven lexical access and semantic integration in recognition memory. Our results support neurophysiologically plausible predictions of the Associative Read-Out Model, which suggests top-down connections from semantic to orthographic layers. Copyright © 2016 Elsevier B.V. All rights reserved.

  13. Modeling open-set spoken word recognition in postlingually deafened adults after cochlear implantation: some preliminary results with the neighborhood activation model.

    PubMed

    Meyer, Ted A; Frisch, Stefan A; Pisoni, David B; Miyamoto, Richard T; Svirsky, Mario A

    2003-07-01

    Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener's lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener's closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process.
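
    To make the model's choice rule concrete, here is a minimal sketch of the frequency-weighted computation the abstract describes: a candidate word's activation is the product of its phoneme confusion probabilities given the stimulus, weighted by word frequency, and the identification probability is that value divided by the summed weighted activations of the stimulus word and its neighbors. The confusion probabilities, frequency counts, and log-frequency weighting below are illustrative assumptions, not the study's parameters.

    ```python
    # Minimal sketch of a Neighborhood-Activation-style choice rule with
    # made-up confusion probabilities and frequency counts.

    from math import log

    # p(perceived phoneme | presented phoneme), illustrative values only
    CONFUSIONS = {
        ("k", "k"): 0.8, ("k", "g"): 0.2,
        ("ae", "ae"): 0.9, ("ae", "eh"): 0.1,
        ("t", "t"): 0.85, ("t", "d"): 0.15,
    }

    FREQ = {"cat": 900, "cad": 30, "gat": 1}  # illustrative frequency counts

    def activation(stimulus, candidate):
        """Product of position-by-position phoneme confusion probabilities."""
        p = 1.0
        for s, c in zip(stimulus, candidate):
            p *= CONFUSIONS.get((s, c), 0.01)  # small floor for unlisted confusions
        return p

    def nam_probability(stimulus_word, stimulus_phones, candidates):
        """Frequency-weighted probability of identifying the stimulus word."""
        weights = {w: activation(stimulus_phones, ph) * log(FREQ[w] + 1)
                   for w, ph in candidates.items()}
        return weights[stimulus_word] / sum(weights.values())

    if __name__ == "__main__":
        candidates = {"cat": ["k", "ae", "t"],
                      "cad": ["k", "ae", "d"],
                      "gat": ["g", "ae", "t"]}
        print(round(nam_probability("cat", candidates["cat"], candidates), 3))
    ```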

  14. Preschool Children’s Memory for Word Forms Remains Stable Over Several Days, but Gradually Decreases after 6 Months

    PubMed Central

    Gordon, Katherine R.; McGregor, Karla K.; Waldier, Brigitte; Curran, Maura K.; Gomez, Rebecca L.; Samuelson, Larissa K.

    2016-01-01

    Research on word learning has focused on children’s ability to identify a target object when given the word form after a minimal number of exposures to novel word-object pairings. However, relatively little research has focused on children’s ability to retrieve the word form when given the target object. The exceptions involve asking children to recall and produce forms, and children typically perform near floor on these measures. In the current study, 3- to 5-year-old children were administered a novel test of word form that allowed for recognition memory and manual responses. Specifically, when asked to label a previously trained object, children were given three forms to choose from: the target, a minimally different form, and a maximally different form. Children demonstrated memory for word forms at three post-training delays: 10 mins (short-term), 2–3 days (long-term), and 6 months to 1 year (very long-term). However, children performed worse at the very long-term delay than the other time points, and the length of the very long-term delay was negatively related to performance. When in error, children were no more likely to select the minimally different form than the maximally different form at all time points. Overall, these results suggest that children remember word forms that are linked to objects over extended post-training intervals, but that their memory for the forms gradually decreases over time without further exposures. Furthermore, memory traces for word forms do not become less phonologically specific over time; rather children either identify the correct form, or they perform at chance. PMID:27729880

  15. Orthographic neighborhood effects in recognition and recall tasks in a transparent orthography.

    PubMed

    Justi, Francis R R; Jaeger, Antonio

    2017-04-01

    The number of orthographic neighbors of a word influences its probability of being retrieved in recognition and free recall memory tests. Even though this phenomenon is well demonstrated for English words, it has yet to be demonstrated for languages with more predictable grapheme-phoneme mappings than English. To address this issue, 4 experiments were conducted to investigate effects of number of orthographic neighbors (N) and effects of frequency of occurrence of orthographic neighbors (NF) on memory retrieval of Brazilian Portuguese words. One hundred twenty-four Brazilian Portuguese speakers first performed a lexical-decision task (LDT) on words that were factorially manipulated according to N and NF, and intermixed with either nonpronounceable nonwords without orthographic neighbors (Experiments 1A and 2A) or pronounceable nonwords with a large number of orthographic neighbors (Experiments 1B and 2B). The words were later used as probes on either recognition (Experiments 1A and 1B) or recall tests (Experiments 2A and 2B). Words with 1 orthographic neighbor were consistently better remembered than words with several orthographic neighbors in all recognition and recall tests. Notably, whereas Experiment 1A yielded higher false alarm rates for words with several rather than 1 orthographic neighbor, Experiment 1B yielded higher false alarm rates for words with 1 rather than several orthographic neighbors. Effects of NF, on the other hand, were not consistent among memory tasks. The effects of N on the recognition and recall tests conducted here are interpreted in light of dual-process models of recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Conducting spoken word recognition research online: Validation and a new timing method.

    PubMed

    Slote, Joseph; Strand, Julia F

    2016-06-01

    Models of spoken word recognition typically make predictions that are then tested in the laboratory against the word recognition scores of human subjects (e.g., Luce & Pisoni Ear and Hearing, 19, 1-36, 1998). Unfortunately, laboratory collection of large sets of word recognition data can be costly and time-consuming. Due to the numerous advantages of online research in speed, cost, and participant diversity, some labs have begun to explore the use of online platforms such as Amazon's Mechanical Turk (AMT) to source participation and collect data (Buhrmester, Kwang, & Gosling Perspectives on Psychological Science, 6, 3-5, 2011). Many classic findings in cognitive psychology have been successfully replicated online, including the Stroop effect, task-switching costs, and Simon and flanker interference (Crump, McDonnell, & Gureckis PLoS ONE, 8, e57410, 2013). However, tasks requiring auditory stimulus delivery have not typically made use of AMT. In the present study, we evaluated the use of AMT for collecting spoken word identification and auditory lexical decision data. Although online users were faster and less accurate than participants in the lab, the results revealed strong correlations between the online and laboratory measures for both word identification accuracy and lexical decision speed. In addition, the scores obtained in the lab and online were equivalently correlated with factors that have been well established to predict word recognition, including word frequency and phonological neighborhood density. We also present and analyze a method for precise auditory reaction timing that is novel to behavioral research. Taken together, these findings suggest that AMT can be a viable alternative to the traditional laboratory setting as a source of participation for some spoken word recognition research.

  17. Print exposure modulates the effects of repetition priming during sentence reading.

    PubMed

    Lowder, Matthew W; Gordon, Peter C

    2017-12-01

    Individual readers vary greatly in the quality of their lexical representations, and consequently in how quickly and efficiently they can access orthographic and lexical knowledge. This variability may be explained, at least in part, by individual differences in exposure to printed language, because practice at reading promotes the development of stronger reading skills. In the present eyetracking experiment, we tested the hypothesis that the efficiency of word recognition during reading improves with increases in print exposure, by determining whether the magnitude of the repetition-priming effect is modulated by individual differences in scores on the author recognition test (ART). Lexical repetition of target words was manipulated across pairs of unrelated sentences that were presented on consecutive trials. The magnitude of the repetition effect was modulated by print exposure in early measures of processing, such that the magnitude of the effect was inversely related to scores on the ART. The results showed that low levels of print exposure, and thus lower-quality lexical representations, are associated with high levels of difficulty recognizing words, and thus with the greatest room to benefit from repetition. Furthermore, the interaction between scores on the ART and repetition suggests that print exposure is not simply an index of general reading speed, but rather that higher levels of print exposure are associated with an enhanced ability to access lexical knowledge and recognize words during reading.

  18. The influence of lexical characteristics and talker accent on the recognition of English words by speakers of Japanese.

    PubMed

    Yoneyama, Kiyoko; Munson, Benjamin

    2017-02-01

    This study examined whether the influence of listeners' language proficiency on L2 speech recognition is affected by the structure of the lexicon. The experiment examined the effect of word frequency (WF) and phonological neighborhood density (PND) on word recognition in native speakers of English and second-language (L2) speakers of English whose first language was Japanese. The stimuli included English words produced by a native speaker of English and English words produced by a native speaker of Japanese (i.e., with Japanese-accented English). The experiment was inspired by the finding of Imai, Flege, and Walley [(2005). J. Acoust. Soc. Am. 117, 896-907] that the influence of talker accent on speech intelligibility for L2 learners of English whose L1 is Spanish varies as a function of words' PND. In the current study, significant interactions between stimulus accentedness and listener group on the accuracy and speed of spoken word recognition were found, as were significant effects of PND and WF on word-recognition accuracy. However, no significant three-way interaction among stimulus talker, listener group, and PND on either measure was found. Results are discussed in light of recent findings on cross-linguistic differences in the nature of the effects of PND on L2 phonological and lexical processing.

  19. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  20. Word attributes and lateralization revisited: implications for dual coding and discrete versus continuous processing.

    PubMed

    Boles, D B

    1989-01-01

    Three attributes of words are their imageability, concreteness, and familiarity. From a literature review and several experiments, I previously concluded (Boles, 1983a) that only familiarity affects the overall near-threshold recognition of words, and that none of the attributes affects right-visual-field superiority for word recognition. Here these conclusions are modified by two experiments demonstrating a critical mediating influence of intentional versus incidental memory instructions. In Experiment 1, subjects were instructed to remember the words they were shown, for subsequent recall. The results showed effects of both imageability and familiarity on overall recognition, as well as an effect of imageability on lateralization. In Experiment 2, word-memory instructions were deleted and the results essentially reinstated the findings of Boles (1983a). It is concluded that right-hemisphere imagery processes can participate in word recognition under intentional memory instructions. Within the dual coding theory (Paivio, 1971), the results argue that both discrete and continuous processing modes are available, that the modes can be used strategically, and that continuous processing can occur prior to response stages.

  1. Impaired Word and Face Recognition in Older Adults with Type 2 Diabetes.

    PubMed

    Jones, Nicola; Riby, Leigh M; Smith, Michael A

    2016-07-01

    Older adults with type 2 diabetes mellitus (DM2) exhibit accelerated decline in some domains of cognition including verbal episodic memory. Few studies have investigated the influence of DM2 status in older adults on recognition memory for more complex stimuli such as faces. In the present study we sought to compare recognition memory performance for words, objects and faces under conditions of relatively low and high cognitive load. Healthy older adults with good glucoregulatory control (n = 13) and older adults with DM2 (n = 24) were administered recognition memory tasks in which stimuli (faces, objects and words) were presented under conditions of either i) low (stimulus presented without a background pattern) or ii) high (stimulus presented against a background pattern) cognitive load. In a subsequent recognition phase, the DM2 group recognized fewer faces than healthy controls. Further, the DM2 group exhibited word recognition deficits in the low cognitive load condition. The recognition memory impairment observed in patients with DM2 has clear implications for day-to-day functioning. Although these deficits were not amplified under conditions of increased cognitive load, the present study emphasizes that recognition memory impairments for both words and more complex stimuli such as faces are a feature of DM2 in older adults. Copyright © 2016 IMSS. Published by Elsevier Inc. All rights reserved.

  2. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
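
    The visual enhancement measure lends itself to a compact formalization. A minimal sketch, assuming the conventional normalization in which the audiovisual gain is scaled by the room for improvement left by auditory-only performance (A and AV are proportion-correct scores in the auditory-only and audiovisual formats):

    ```latex
    R_a = \frac{AV - A}{1 - A}
    ```

    On this definition, R_a approaches 1 when the visual channel recovers nearly everything the auditory signal misses and equals 0 when lipreading adds nothing beyond auditory-only performance.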

  3. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  4. Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization

    ERIC Educational Resources Information Center

    Bardhan, Neil Prodeep

    2010-01-01

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…

  5. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  6. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  7. Russian Character Recognition using Self-Organizing Map

    NASA Astrophysics Data System (ADS)

    Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.

    2017-01-01

    The World Tourism Organization (UNWTO) reported in 2014 that 28 million visitors visited Russia. Most of these visitors may have trouble typing Russian words into a digital dictionary, because the Cyrillic letters used in Russia and its neighboring countries have shapes that differ from Latin letters and may be unfamiliar to visitors. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera is used to capture an image of the words as input. The captured image is cropped, and several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, feature extraction is applied to the image, and the Cyrillic letters in the image are recognized using the Self-Organizing Map (SOM) algorithm. SOM correctly recognized 89.09% of the Cyrillic letters in computer-generated images and 88.89% of the Cyrillic letters in images captured by a smartphone camera. For word recognition from smartphone-captured images, SOM fully recognized 292 words and partially recognized 58 words, for a word-recognition accuracy of 83.42%.
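
    As a rough illustration of the recognition step, the sketch below trains a small Self-Organizing Map on letter feature vectors, labels each map node by majority vote of the training letters it wins, and classifies a new letter image by its best-matching unit. It is a generic numpy SOM under assumed feature shapes and hyperparameters, not the preprocessing pipeline or network configuration used in the paper.

    ```python
    import numpy as np

    def train_som(features, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
        """Fit an SOM to feature vectors of shape (n_samples, n_features)."""
        rng = np.random.default_rng(seed)
        w, h = grid
        weights = rng.random((w, h, features.shape[1]))
        # Map coordinates, used for the Gaussian neighborhood on the grid.
        coords = np.stack(np.meshgrid(np.arange(w), np.arange(h), indexing="ij"), axis=-1).astype(float)
        total_steps = epochs * len(features)
        step = 0
        for _ in range(epochs):
            for idx in rng.permutation(len(features)):
                v = features[idx]
                # Best-matching unit: node whose prototype is closest to the input.
                bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - v, axis=-1)), (w, h))
                # Linearly decay learning rate and neighborhood radius over training.
                frac = step / total_steps
                lr, sigma = lr0 * (1 - frac), max(sigma0 * (1 - frac), 0.5)
                d2 = np.sum((coords - np.array(bmu, float)) ** 2, axis=-1)
                influence = np.exp(-d2 / (2 * sigma ** 2))
                weights += lr * influence[..., None] * (v - weights)
                step += 1
        return weights

    def label_nodes(weights, features, labels):
        """Assign each map node the letter label of the training samples it wins most often."""
        w, h = weights.shape[:2]
        votes = {}
        for v, lab in zip(features, labels):
            bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - v, axis=-1)), (w, h))
            votes.setdefault(bmu, {}).setdefault(lab, 0)
            votes[bmu][lab] += 1
        return {node: max(counts, key=counts.get) for node, counts in votes.items()}

    def recognize(weights, node_labels, v):
        """Classify one letter-image feature vector by its best-matching unit's label."""
        bmu = np.unravel_index(np.argmin(np.linalg.norm(weights - v, axis=-1)), weights.shape[:2])
        return node_labels.get(bmu)
    ```

    In practice the paper's feature extraction, map size, and training schedule would replace the placeholder choices shown here; word-level decisions then follow from concatenating the recognized letters.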

  8. The ties that bind what is known to the recognition of what is new.

    PubMed

    Nelson, D L; Zhang, N; McKinney, V M

    2001-09-01

    Recognition success varies with how information is encoded (e.g., level of processing) and with what is already known as a result of past learning (e.g., word frequency). This article presents the results of experiments showing that preexisting connections involving the associates of studied words facilitate their recognition regardless of whether the words are intentionally encoded or are incidentally encoded under semantic or nonsemantic conditions. Words are more likely to be recognized when they have either more resonant connections coming back to them from their associates or more connections among their associates. Such results occur even though attention is never drawn to these associates. Regression analyses showed that these connections affect recognition independently of frequency, so the present results add to the literature showing that prior lexical knowledge contributes to episodic recognition. In addition, equations that use free-association data to derive composite strength indices of resonance and connectivity were evaluated. Implications for theories of recognition are discussed.

  9. Predicting individual differences in reading comprehension: a twin study

    PubMed Central

    Cutting, Laurie; Deater-Deckard, Kirby; DeThorne, Laura S.; Justice, Laura M.; Schatschneider, Chris; Thompson, Lee A.; Petrill, Stephen A.

    2010-01-01

    We examined the Simple View of reading from a behavioral genetic perspective. Two aspects of word decoding (phonological decoding and word recognition), two aspects of oral language skill (listening comprehension and vocabulary), and reading comprehension were assessed in a twin sample at age 9. Using latent factor models, we found that overlap among phonological decoding, word recognition, listening comprehension, vocabulary, and reading comprehension was primarily due to genetic influences. Shared environmental influences accounted for associations among word recognition, listening comprehension, vocabulary, and reading comprehension. Independent of phonological decoding and word recognition, there was a separate genetic link between listening comprehension, vocabulary, and reading comprehension and a specific shared environmental link between vocabulary and reading comprehension. There were no residual genetic or environmental influences on reading comprehension. The findings provide evidence for a genetic basis to the “Simple View” of reading. PMID:20814768

  10. Ease of identifying words degraded by visual noise.

    PubMed

    Barber, P; de la Mahotière, C

    1982-08-01

    A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.

  11. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  12. Automatic vigilance for negative words in lexical decision and naming: comment on Larsen, Mercer, and Balota (2006).

    PubMed

    Estes, Zachary; Adelman, James S

    2008-08-01

    An automatic vigilance hypothesis states that humans preferentially attend to negative stimuli, and this attention to negative valence disrupts the processing of other stimulus properties. Thus, negative words typically elicit slower color naming, word naming, and lexical decisions than neutral or positive words. Larsen, Mercer, and Balota analyzed the stimuli from 32 published studies, and they found that word valence was confounded with several lexical factors known to affect word recognition. Indeed, with these lexical factors covaried out, Larsen et al. found no evidence of automatic vigilance. The authors report a more sensitive analysis of 1011 words. Results revealed a small but reliable valence effect, such that negative words (e.g., "shark") elicit slower lexical decisions and naming than positive words (e.g., "beach"). Moreover, the relation between valence and recognition was categorical rather than linear; the extremity of a word's valence did not affect its recognition. This valence effect was not attributable to word length, frequency, orthographic neighborhood size, contextual diversity, first phoneme, or arousal. Thus, the present analysis provides the most powerful demonstration of automatic vigilance to date.

  13. Development of First-Graders' Word Reading Skills: For Whom Can Dynamic Assessment Tell Us More?

    PubMed

    Cho, Eunsoo; Compton, Donald L; Gilbert, Jennifer K; Steacy, Laura M; Collins, Alyson A; Lindström, Esther R

    2017-01-01

    Dynamic assessment (DA) of word reading measures learning potential for early reading development by documenting the amount of assistance needed to learn how to read words with unfamiliar orthography. We examined the additive value of DA for predicting first-grade decoding and word recognition development while controlling for autoregressive effects. Additionally, we examined whether predictive validity of DA would be higher for students who have poor phonological awareness skills. First-grade students (n = 105) were assessed on measures of word reading, phonological awareness, rapid automatized naming, and DA in the fall and again assessed on word reading measures in the spring. A series of planned, moderated multiple regression analyses indicated that DA made a significant and unique contribution in predicting word recognition development above and beyond the autoregressor, particularly for students with poor phonological awareness skills. For these students, DA explained 3.5% of the unique variance in end-of-first-grade word recognition that was not attributable to autoregressive effect. Results suggest that DA provides an important source of individual differences in the development of word recognition skills that cannot be fully captured by merely assessing the present level of reading skills through traditional static assessment, particularly for students at risk for developing reading disabilities. © Hammill Institute on Disabilities 2015.
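
    A minimal sketch of the kind of moderated regression described above, using statsmodels with hypothetical column names (word_rec_fall, word_rec_spring, pa, ran, da); the study's actual variable coding and model set may differ.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data frame with fall/spring word recognition, phonological
    # awareness (pa), rapid automatized naming (ran), and dynamic assessment (da).
    df = pd.read_csv("first_grade_reading.csv")

    # Autoregressive baseline: spring scores predicted from fall scores and static measures.
    base = smf.ols("word_rec_spring ~ word_rec_fall + pa + ran", data=df).fit()

    # Add DA plus its interaction with the moderator (phonological awareness).
    full = smf.ols("word_rec_spring ~ word_rec_fall + ran + pa * da", data=df).fit()

    # Unique variance attributable to DA and the DA-by-PA interaction.
    print(f"Delta R^2 for DA terms: {full.rsquared - base.rsquared:.3f}")
    ```

    A reliable interaction term in such a model would indicate that DA is most informative for children with weaker phonological awareness, mirroring the pattern reported above.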

  14. Does increasing the intelligibility of a competing sound source interfere more with speech comprehension in older adults than it does in younger adults?

    PubMed

    Lu, Zihui; Daneman, Meredyth; Schneider, Bruce A

    2016-11-01

    A previous study (Schneider, Daneman, Murphy, & Kwong See, 2000) found that older listeners' decreased ability to recognize individual words in a noisy auditory background was responsible for most, if not all, of the comprehension difficulties older adults experience when listening to a lecture in a background of unintelligible babble. The present study investigated whether the use of a more intelligible distractor (a competing lecture) might reveal an increased susceptibility to distraction in older adults. The results from Experiments 1 and 2 showed that both normal-hearing and hearing-impaired older adults performed more poorly than younger adults when everyone was tested in identical listening situations. However, when the listening situation was individually adjusted to compensate for age-related differences in the ability to recognize individual words in noise, age-related differences in comprehension disappeared. Experiment 3 compared the masking effects of a single-talker competing lecture to a babble of 12 voices directly after adjusting for word recognition. The results showed that the competing lecture interfered more than did the babble for both younger and older listeners. Interestingly, an increase in the level of noise had a deleterious effect on listening when the distractor was babble but had no effect when it was a competing lecture. These findings indicated that the speech comprehension difficulties of healthy older adults in noisy backgrounds primarily reflect age-related declines in the ability to recognize individual words.

  15. Extrinsic Cognitive Load Impairs Spoken Word Recognition in High- and Low-Predictability Sentences.

    PubMed

    Hunter, Cynthia R; Pisoni, David B

    Listening effort (LE) induced by speech degradation reduces performance on concurrent cognitive tasks. However, a converse effect of extrinsic cognitive load on recognition of spoken words in sentences has not been shown. The aims of the present study were to (a) examine the impact of extrinsic cognitive load on spoken word recognition in a sentence recognition task and (b) determine whether cognitive load and/or LE needed to understand spectrally degraded speech would differentially affect word recognition in high- and low-predictability sentences. Downstream effects of speech degradation and sentence predictability on the cognitive load task were also examined. One hundred twenty young adults identified sentence-final spoken words in high- and low-predictability Speech Perception in Noise sentences. Cognitive load consisted of a preload of short (low-load) or long (high-load) sequences of digits, presented visually before each spoken sentence and reported either before or after identification of the sentence-final word. LE was varied by spectrally degrading sentences with four-, six-, or eight-channel noise vocoding. Level of spectral degradation and order of report (digits first or words first) were between-participants variables. Effects of cognitive load, sentence predictability, and speech degradation on accuracy of sentence-final word identification as well as recall of preload digit sequences were examined. In addition to anticipated main effects of sentence predictability and spectral degradation on word recognition, we found an effect of cognitive load, such that words were identified more accurately under low load than high load. However, load differentially affected word identification in high- and low-predictability sentences depending on the level of sentence degradation. Under severe spectral degradation (four-channel vocoding), the effect of cognitive load on word identification was present for high-predictability sentences but not for low-predictability sentences. Under mild spectral degradation (eight-channel vocoding), the effect of load was present for low-predictability sentences but not for high-predictability sentences. There were also reliable downstream effects of speech degradation and sentence predictability on recall of the preload digit sequences. Long digit sequences were more easily recalled following spoken sentences that were less spectrally degraded. When digits were reported after identification of sentence-final words, short digit sequences were recalled more accurately when the spoken sentences were predictable. Extrinsic cognitive load can impair recognition of spectrally degraded spoken words in a sentence recognition task. Cognitive load affected word identification in both high- and low-predictability sentences, suggesting that load may impact both context use and lower-level perceptual processes. Consistent with prior work, LE also had downstream effects on memory for visual digit sequences. Results support the proposal that extrinsic cognitive load and LE induced by signal degradation both draw on a central, limited pool of cognitive resources that is used to recognize spoken words in sentences under adverse listening conditions.
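
    For readers unfamiliar with the degradation manipulation, the following is a generic noise-vocoder sketch: the signal is split into log-spaced analysis bands, each band's temporal envelope is extracted, and the envelope modulates band-limited noise. The filter order, band edges, and envelope extraction here are illustrative assumptions, not the study's exact parameters.

    ```python
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, seed=0):
        """Spectrally degrade speech with an n-channel noise vocoder.

        `speech` is a float array sampled at `fs` Hz; `f_hi` must stay below fs / 2.
        Fewer channels means coarser spectral detail (e.g., 4 vs. 8 channels).
        """
        rng = np.random.default_rng(seed)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
        carrier = rng.uniform(-1.0, 1.0, len(speech))      # broadband noise carrier
        out = np.zeros(len(speech))
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, speech)
            envelope = np.abs(hilbert(band))       # amplitude envelope of the band
            noise_band = sosfiltfilt(sos, carrier) # noise restricted to the same band
            out += envelope * noise_band
        # Roughly match the overall level of the input.
        out *= np.sqrt(np.mean(speech ** 2) / (np.mean(out ** 2) + 1e-12))
        return out

    # Example: four- vs. eight-channel versions of the same sentence.
    # degraded_4 = noise_vocode(speech, fs, n_channels=4)
    # degraded_8 = noise_vocode(speech, fs, n_channels=8)
    ```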

  16. The Influence of Phonotactic Probability on Word Recognition in Toddlers

    ERIC Educational Resources Information Center

    MacRoy-Higgins, Michelle; Shafer, Valerie L.; Schwartz, Richard G.; Marton, Klara

    2014-01-01

    This study examined the influence of phonotactic probability on word recognition in English-speaking toddlers. Typically developing toddlers completed a preferential looking paradigm using familiar words, which consisted of either high or low phonotactic probability sound sequences. The participants' looking behavior was recorded in response to…

  17. Individual Differences in Online Spoken Word Recognition: Implications for SLI

    ERIC Educational Resources Information Center

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2010-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have…

  18. Influences of Lexical Processing on Reading.

    ERIC Educational Resources Information Center

    Yang, Yu-Fen; Kuo, Hsing-Hsiu

    2003-01-01

    Investigates how early lexical processing (word recognition) could influence reading. Finds that less-proficient readers could not finish the task of word recognition within time limits and their accuracy rates were quite low, whereas the proficient readers processed the physical words immediately and translated them into meanings quickly in order…

  19. A Prerequisite to L1 Homophone Effects in L2 Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Nakai, Satsuki; Lindsay, Shane; Ota, Mitsuhiko

    2015-01-01

    When both members of a phonemic contrast in L2 (second language) are perceptually mapped to a single phoneme in one's L1 (first language), L2 words containing a member of that contrast can spuriously activate L2 words in spoken-word recognition. For example, upon hearing cattle, Dutch speakers of English are reported to experience activation…

  20. The Role of the Association in Recognition Memory.

    ERIC Educational Resources Information Center

    Underwood, Benton J.

    The purpose of the eight experiments was to assess the role which associations between two words played in recognition decisions. The evidence on weak associations established in the laboratory indicated that association was playing a small role, but that the recognition performance on pairs of words was highly predictable from frequency…

  1. Memory effects of sleep, emotional valence, arousal and novelty in children.

    PubMed

    Vermeulen, Marije C M; van der Heijden, Kristiaan B; Benjamins, Jeroen S; Swaab, Hanna; van Someren, Eus J W

    2017-06-01

    Effectiveness of memory consolidation is determined by multiple factors, including sleep after learning, emotional valence, arousal and novelty. Few studies have investigated how the effect of sleep compares with (and interacts with) these other factors, and virtually none have done so in children. The present study did so by repeated assessment of declarative memory in 386 children (45% boys) aged 9-11 years through an online word-pair task. Children were randomly assigned to either a morning or evening learning session of 30 unrelated word-pairs with positively, neutrally or negatively valenced cues and neutral targets. After immediately assessing baseline recognition, delayed recognition was recorded either 12 or 24 h later, resulting in four different assessment schedules. One week later, the procedure was repeated with exactly the same word-pairs to evaluate whether effects differed for relearning versus original novel learning. Mixed-effect logistic regression models were used to evaluate how the probability of correct recognition was affected by sleep, valence, arousal, novelty and their interactions. Both immediate and delayed recognition were worse for pairs with negatively valenced or less arousing cue words. Relearning improved immediate and delayed word-pair recognition. In contrast to these effects, sleep did not affect recognition, nor did sleep moderate the effects of arousal, valence and novelty. The findings suggest a robust inclination of children to specifically forget the pairing of words to negatively valenced cue words. In agreement with a recent meta-analysis, children seem to depend less on sleep for the consolidation of information than has been reported for adults, irrespective of the emotional valence, arousal and novelty of word-pairs. © 2017 European Sleep Research Society.

  2. Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease

    PubMed Central

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Background/Aims Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects We recruited 349 consecutive AD patients who attended a memory clinic. Methods Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551

  3. Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.

    PubMed

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.

  4. Reconsidering the role of temporal order in spoken word recognition.

    PubMed

    Toscano, Joseph C; Anderson, Nathaniel D; McMurray, Bob

    2013-10-01

    Models of spoken word recognition assume that words are represented as sequences of phonemes. We evaluated this assumption by examining phonemic anadromes, words that share the same phonemes but differ in their order (e.g., sub and bus). Using the visual-world paradigm, we found that listeners show more fixations to anadromes (e.g., sub when bus is the target) than to unrelated words (well) and to words that share the same vowel but not the same set of phonemes (sun). This contrasts with the predictions of existing models and suggests that words are not defined as strict sequences of phonemes.

  5. Coordination of Word Recognition and Oculomotor Control During Reading: The Role of Implicit Lexical Decisions

    PubMed Central

    Choi, Wonil; Gordon, Peter C.

    2013-01-01

    The coordination of word-recognition and oculomotor processes during reading was evaluated in two eye-tracking experiments that examined how word skipping, where a word is not fixated during first-pass reading, is affected by the lexical status of a letter string in the parafovea and ease of recognizing that string. Ease of lexical recognition was manipulated through target-word frequency (Experiment 1) and through repetition priming between prime-target pairs embedded in a sentence (Experiment 2). Using the gaze-contingent boundary technique the target word appeared in the parafovea either with full preview or with transposed-letter (TL) preview. The TL preview strings were nonwords in Experiment 1 (e.g., bilnk created from the target blink), but were words in Experiment 2 (e.g., sacred created from the target scared). Experiment 1 showed greater skipping for high-frequency than low-frequency target words in the full preview condition but not in the TL preview (nonword) condition. Experiment 2 showed greater skipping for target words that repeated an earlier prime word than for those that did not, with this repetition priming occurring both with preview of the full target and with preview of the target’s TL neighbor word. However, time to progress from the word after the target was greater following skips of the TL preview word, whose meaning was anomalous in the sentence context, than following skips of the full preview word whose meaning fit sensibly into the sentence context. Together, the results support the idea that coordination between word-recognition and oculomotor processes occurs at the level of implicit lexical decisions. PMID:23106372

  6. Modeling Open-Set Spoken Word Recognition in Postlingually Deafened Adults after Cochlear Implantation: Some Preliminary Results with the Neighborhood Activation Model

    PubMed Central

    Meyer, Ted A.; Frisch, Stefan A.; Pisoni, David B.; Miyamoto, Richard T.; Svirsky, Mario A.

    2012-01-01

    Hypotheses Do cochlear implants provide enough information to allow adult cochlear implant users to understand words in ways that are similar to listeners with acoustic hearing? Can we use a computational model to gain insight into the underlying mechanisms used by cochlear implant users to recognize spoken words? Background The Neighborhood Activation Model has been shown to be a reasonable model of word recognition for listeners with normal hearing. The Neighborhood Activation Model assumes that words are recognized in relation to other similar-sounding words in a listener’s lexicon. The probability of correctly identifying a word is based on the phoneme perception probabilities from a listener’s closed-set consonant and vowel confusion matrices modified by the relative frequency of occurrence of the target word compared with similar-sounding words (neighbors). Common words with few similar-sounding neighbors are more likely to be selected as responses than less common words with many similar-sounding neighbors. Recent studies have shown that several of the assumptions of the Neighborhood Activation Model also hold true for cochlear implant users. Methods Closed-set consonant and vowel confusion matrices were obtained from 26 postlingually deafened adults who use cochlear implants. Confusion matrices were used to represent input errors to the Neighborhood Activation Model. Responses to the different stimuli were then generated by the Neighborhood Activation Model after incorporating the frequency of occurrence counts of the stimuli and their neighbors. Model outputs were compared with obtained performance measures on the Consonant-Vowel Nucleus-Consonant word test. Information transmission analysis was used to assess whether the Neighborhood Activation Model was able to successfully generate and predict word and individual phoneme recognition by cochlear implant users. Results The Neighborhood Activation Model predicted Consonant-Vowel Nucleus-Consonant test words at levels similar to those correctly identified by the cochlear implant users. The Neighborhood Activation Model also predicted phoneme feature information well. Conclusion The results obtained suggest that the Neighborhood Activation Model provides a reasonable explanation of word recognition by postlingually deafened adults after cochlear implantation. It appears that multichannel cochlear implants give cochlear implant users access to their mental lexicons in a manner that is similar to listeners with acoustic hearing. The lexical properties of the test stimuli used to assess performance are important to spoken-word recognition and should be included in further models of the word recognition process. PMID:12851554

  7. Tracking speech comprehension in space and time.

    PubMed

    Pulvermüller, Friedemann; Shtyrov, Yury; Ilmoniemi, Risto J; Marslen-Wilson, William D

    2006-07-01

    A fundamental challenge for the cognitive neuroscience of language is to capture the spatio-temporal patterns of brain activity that underlie critical functional components of the language comprehension process. We combine here psycholinguistic analysis, whole-head magnetoencephalography (MEG), the Mismatch Negativity (MMN) paradigm, and state-of-the-art source localization techniques (Equivalent Current Dipole and L1 Minimum-Norm Current Estimates) to locate the process of spoken word recognition at a specific moment in space and time. The magnetic MMN to words presented as rare "deviant stimuli" in an oddball paradigm among repetitive "standard" speech stimuli peaked 100-150 ms after the information in the acoustic input was sufficient for word recognition. The latency with which words were recognized corresponded to that of an MMN source in the left superior temporal cortex. There was a significant correlation (r = 0.7) of latency measures of word recognition in individual study participants with the latency of the activity peak of the superior temporal source. These results demonstrate a correspondence between the behaviorally determined recognition point for spoken words and the cortical activation in left posterior superior temporal areas. Both the MMN calculated in the classic manner, obtained by subtracting standard from deviant stimulus response recorded in the same experiment, and the identity MMN (iMMN), defined as the difference between the neuromagnetic responses to the same stimulus presented as standard and deviant stimulus, showed the same significant correlation with word recognition processes.

  8. Semantic Neighborhood Effects for Abstract versus Concrete Words

    PubMed Central

    Danguecan, Ashley N.; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422

  9. Semantic Neighborhood Effects for Abstract versus Concrete Words.

    PubMed

    Danguecan, Ashley N; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words.

  10. Face and Word Recognition Can Be Selectively Affected by Brain Injury or Developmental Disorders.

    PubMed

    Robotham, Ro J; Starrfelt, Randi

    2017-01-01

    Face and word recognition have traditionally been thought to rely on highly specialised and relatively independent cognitive processes. Some of the strongest evidence for this has come from patients with seemingly category-specific visual perceptual deficits such as pure prosopagnosia, a selective face recognition deficit, and pure alexia, a selective word recognition deficit. Together, the patterns of impaired reading with preserved face recognition and impaired face recognition with preserved reading constitute a double dissociation. The existence of these selective deficits has been questioned over the past decade. It has been suggested that studies describing patients with these pure deficits have failed to measure the supposedly preserved functions using sensitive enough measures, and that if tested using sensitive measurements, all patients with deficits in one visual category would also have deficits in the other. The implications of this would be immense, with most textbooks in cognitive neuropsychology requiring drastic revisions. In order to evaluate the evidence for dissociations, we review studies that specifically investigate whether face or word recognition can be selectively affected by acquired brain injury or developmental disorders. We only include studies published since 2004, as comprehensive reviews of earlier studies are available. Most of the studies assess the supposedly preserved functions using sensitive measurements. We found convincing evidence that reading can be preserved in acquired and developmental prosopagnosia and also evidence (though weaker) that face recognition can be preserved in acquired or developmental dyslexia, suggesting that face and word recognition are at least in part supported by independent processes.

  11. Audiovisual speech facilitates voice learning.

    PubMed

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  12. Sign language ability in young deaf signers predicts comprehension of written sentences in English.

    PubMed

    Andrew, Kathy N; Hoshooley, Jennifer; Joanisse, Marc F

    2014-01-01

    We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into 'skilled' and 'less-skilled' signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages.

  13. Sign Language Ability in Young Deaf Signers Predicts Comprehension of Written Sentences in English

    PubMed Central

    Andrew, Kathy N.; Hoshooley, Jennifer; Joanisse, Marc F.

    2014-01-01

    We investigated the robust correlation between American Sign Language (ASL) and English reading ability in 51 young deaf signers ages 7;3 to 19;0. Signers were divided into ‘skilled’ and ‘less-skilled’ signer groups based on their performance on three measures of ASL. We next assessed reading comprehension of four English sentence structures (actives, passives, pronouns, reflexive pronouns) using a sentence-to-picture-matching task. Of interest was the extent to which ASL proficiency provided a foundation for lexical and syntactic processes of English. Skilled signers outperformed less-skilled signers overall. Error analyses further indicated greater single-word recognition difficulties in less-skilled signers marked by a higher rate of errors reflecting an inability to identify the actors and actions described in the sentence. Our findings provide evidence that increased ASL ability supports English sentence comprehension both at the levels of individual words and syntax. This is consistent with the theory that first language learning promotes second language through transference of linguistic elements irrespective of the transparency of mapping of grammatical structures between the two languages. PMID:24587174

  14. Interdependence of linguistic and indexical speech perception skills in school-age children with early cochlear implantation.

    PubMed

    Geers, Ann E; Davidson, Lisa S; Uchanski, Rosalie M; Nicholas, Johanna G

    2013-09-01

    This study documented the ability of experienced pediatric cochlear implant (CI) users to perceive linguistic properties (what is said) and indexical attributes (emotional intent and talker identity) of speech, and examined the extent to which linguistic (LSP) and indexical (ISP) perception skills are related. Preimplant-aided hearing, age at implantation, speech processor technology, CI-aided thresholds, sequential bilateral cochlear implantation, and academic integration with hearing age-mates were examined for their possible relationships to both LSP and ISP skills. Sixty 9- to 12-year olds, first implanted at an early age (12 to 38 months), participated in a comprehensive test battery that included the following LSP skills: (1) recognition of monosyllabic words at loud and soft levels, (2) repetition of phonemes and suprasegmental features from nonwords, and (3) recognition of key words from sentences presented within a noise background, and the following ISP skills: (1) discrimination of across-gender and within-gender (female) talkers and (2) identification and discrimination of emotional content from spoken sentences. A group of 30 age-matched children without hearing loss completed the nonword repetition, and talker- and emotion-perception tasks for comparison. Word-recognition scores decreased with signal level from a mean of 77% correct at 70 dB SPL to 52% at 50 dB SPL. On average, CI users recognized 50% of key words presented in sentences that were 9.8 dB above background noise. Phonetic properties were repeated from nonword stimuli at about the same level of accuracy as suprasegmental attributes (70 and 75%, respectively). The majority of CI users identified emotional content and differentiated talkers significantly above chance levels. Scores on LSP and ISP measures were combined into separate principal component scores and these components were highly correlated (r = 0.76). Both LSP and ISP component scores were higher for children who received a CI at the youngest ages, upgraded to more recent CI technology and had lower CI-aided thresholds. Higher scores, for both LSP and ISP components, were also associated with higher language levels and mainstreaming at younger ages. Higher ISP scores were associated with better social skills. Results strongly support a link between indexical and linguistic properties in perceptual analysis of speech. These two channels of information appear to be processed together in parallel by the auditory system and are inseparable in perception. Better speech performance, for both linguistic and indexical perception, is associated with younger age at implantation and use of more recent speech processor technology. Children with better speech perception demonstrated better spoken language, earlier academic mainstreaming, and placement in more typically sized classrooms (i.e., >20 students). Well-developed social skills were more highly associated with the ability to discriminate the nuances of talker identity and emotion than with the ability to recognize words and sentences through listening. The extent to which early cochlear implantation enabled these early-implanted children to make use of both linguistic and indexical properties of speech influenced not only their development of spoken language, but also their ability to function successfully in a hearing world.

  15. The Effective Use of Symbols in Teaching Word Recognition to Children with Severe Learning Difficulties: A Comparison of Word Alone, Integrated Picture Cueing and the Handle Technique.

    ERIC Educational Resources Information Center

    Sheehy, Kieron

    2002-01-01

    A comparison is made between a new technique (the Handle Technique), Integrated Picture Cueing, and a Word Alone Method. Results show that using a new combination of teaching strategies enabled logographic symbols to be used effectively in teaching word recognition to 12 children with severe learning difficulties. (Contains references.) (Author/CR)

  16. Spoken Word Recognition in Adolescents with Autism Spectrum Disorders and Specific Language Impairment

    ERIC Educational Resources Information Center

    Loucas, Tom; Riches, Nick; Baird, Gillian; Pickles, Andrew; Simonoff, Emily; Chandler, Susie; Charman, Tony

    2013-01-01

    Spoken word recognition, during gating, appears intact in specific language impairment (SLI). This study used gating to investigate the process in adolescents with autism spectrum disorders plus language impairment (ALI). Adolescents with ALI, SLI, and typical language development (TLD), matched on nonverbal IQ, listened to gated words that varied…

  17. Limited Role of Contextual Information in Adult Word Recognition. Technical Report No. 411.

    ERIC Educational Resources Information Center

    Durgunoglu, Aydin Y.

    Recognizing a word in a meaningful text involves processes that combine information from many different sources, and both bottom-up processes (such as feature extraction and letter recognition) and top-down processes (contextual information) are thought to interact when skilled readers recognize words. Two similar experiments investigated word…

  18. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  19. Foveational Complexity in Single Word Identification: Contralateral Visual Pathways Are Advantaged over Ipsilateral Pathways

    ERIC Educational Resources Information Center

    Obregon, Mateo; Shillcock, Richard

    2012-01-01

    Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…

  20. The Temporal Dynamics of Spoken Word Recognition in Adverse Listening Conditions

    ERIC Educational Resources Information Center

    Brouwer, Susanne; Bradlow, Ann R.

    2016-01-01

    This study examined the temporal dynamics of spoken word recognition in noise and background speech. In two visual-world experiments, English participants listened to target words while looking at four pictures on the screen: a target (e.g. "candle"), an onset competitor (e.g. "candy"), a rhyme competitor (e.g.…

  1. Using Constant Time Delay to Teach Braille Word Recognition

    ERIC Educational Resources Information Center

    Hooper, Jonathan; Ivy, Sarah; Hatton, Deborah

    2014-01-01

    Introduction: Constant time delay has been identified as an evidence-based practice to teach print sight words and picture recognition (Browder, Ahlbrim-Delzell, Spooner, Mims, & Baker, 2009). For the study presented here, we tested the effectiveness of constant time delay to teach new braille words. Methods: A single-subject multiple baseline…

  2. Spoken Word Recognition of Chinese Words in Continuous Speech

    ERIC Educational Resources Information Center

    Yip, Michael C. W.

    2015-01-01

    The present study examined the role that the positional probability of syllables plays in the recognition of spoken words in continuous Cantonese speech. Because some sounds occur more frequently at the beginning or ending positions of Cantonese syllables than others, this kind of probabilistic information may cue the locations…

  3. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  4. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  5. Genetic and Environmental Influences on Individual Differences in Printed Word Recognition.

    ERIC Educational Resources Information Center

    Gayan, Javier; Olson, Richard K.

    2003-01-01

    Explored genetic and environmental etiologies of individual differences in printed word recognition and related skills in identical and fraternal twin 8- to 18-year-olds. Found evidence for moderate genetic influences common between IQ, phoneme awareness, and word-reading skills and for stronger IQ-independent genetic influences that were common…

  6. L2 Gender Facilitation and Inhibition in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Behney, Jennifer N.

    2011-01-01

    This dissertation investigates the role of grammatical gender facilitation and inhibition in second language (L2) learners' spoken word recognition. Native speakers of languages that have grammatical gender are sensitive to gender marking when hearing and recognizing a word. Gender facilitation refers to when a given noun that is preceded by an…

  7. Parallel language activation and cognitive control during spoken word recognition in bilinguals

    PubMed Central

    Blumenfeld, Henrike K.; Marian, Viorica

    2013-01-01

    Accounts of bilingual cognitive advantages suggest an associative link between cross-linguistic competition and inhibitory control. We investigate this link by examining English-Spanish bilinguals’ parallel language activation during auditory word recognition and nonlinguistic Stroop performance. Thirty-one English-Spanish bilinguals and 30 English monolinguals participated in an eye-tracking study. Participants heard words in English (e.g., comb) and identified corresponding pictures from a display that included pictures of a Spanish competitor (e.g., conejo, English rabbit). Bilinguals with higher Spanish proficiency showed more parallel language activation and smaller Stroop effects than bilinguals with lower Spanish proficiency. Across all bilinguals, stronger parallel language activation between 300–500ms after word onset was associated with smaller Stroop effects; between 633–767ms, reduced parallel language activation was associated with smaller Stroop effects. Results suggest that bilinguals who perform well on the Stroop task show increased cross-linguistic competitor activation during early stages of word recognition and decreased competitor activation during later stages of word recognition. Findings support the hypothesis that cross-linguistic competition impacts domain-general inhibition. PMID:24244842

  8. The Effect of Signal-to-Noise Ratio on Linguistic Processing in a Semantic Judgment Task: An Aging Study.

    PubMed

    Stanley, Nicholas; Davis, Tara; Estis, Julie

    2017-03-01

    Aging effects on speech understanding in noise have primarily been assessed through speech recognition tasks. Recognition tasks, which focus on bottom-up, perceptual aspects of speech understanding, intentionally limit linguistic and cognitive factors by asking participants to only repeat what they have heard. On the other hand, linguistic processing tasks require bottom-up and top-down (linguistic, cognitive) processing skills and are, therefore, more reflective of speech understanding abilities used in everyday communication. The effect of signal-to-noise ratio (SNR) on linguistic processing ability is relatively unknown for either young (YAs) or older adults (OAs). To determine if reduced SNRs would be more deleterious to the linguistic processing of OAs than YAs, as measured by accuracy and reaction time in a semantic judgment task in competing speech. In the semantic judgment task, participants indicated via button press whether word pairs were a semantic Match or No Match. This task was performed in quiet, as well as, +3, 0, -3, and -6 dB SNR with two-talker speech competition. Seventeen YAs (20-30 yr) with normal hearing sensitivity and 17 OAs (60-68 yr) with normal hearing sensitivity or mild-to-moderate sensorineural hearing loss within age-appropriate norms. Accuracy, reaction time, and false alarm rate were measured and analyzed using a mixed design analysis of variance. A decrease in SNR level significantly reduced accuracy and increased reaction time in both YAs and OAs. However, poor SNRs affected accuracy and reaction time of Match and No Match word pairs differently. Accuracy for Match pairs declined at a steeper rate than No Match pairs in both groups as SNR decreased. In addition, reaction time for No Match pairs increased at a greater rate than Match pairs in more difficult SNRs, particularly at -3 and -6 dB SNR. False-alarm rates indicated that participants had a response bias to No Match pairs as the SNR decreased. Age-related differences were limited to No Match pair accuracies at -6 dB SNR. The ability to correctly identify semantically matched word pairs was more susceptible to disruption by a poor SNR than semantically unrelated words in both YAs and OAs. The effect of SNR on this semantic judgment task implies that speech competition differentially affected the facilitation of semantically related words and the inhibition of semantically incompatible words, although processing speed, as measured by reaction time, remained faster for semantically matched pairs. Overall, the semantic judgment task in competing speech elucidated the effect of a poor listening environment on the higher order processing of words. American Academy of Audiology

  9. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research.

    PubMed

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-Etriby, Sherif

    2016-03-11

    Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the word list used matters when training samples are meant to reflect the input of a specific area of application. However, generating training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why such databases are scarce, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods, each of which requires particular ground truth or samples to enable optimal training and validation, and these are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates the corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction.

  10. Synthesis of Common Arabic Handwritings to Aid Optical Character Recognition Research

    PubMed Central

    Dinges, Laslo; Al-Hamadi, Ayoub; Elzobi, Moftah; El-etriby, Sherif

    2016-01-01

    Document analysis tasks such as pattern recognition, word spotting or segmentation require comprehensive databases for training and validation. Not only variations in writing style but also the word list used matters when training samples are meant to reflect the input of a specific area of application. However, generating training samples is expensive in terms of manpower and time, particularly if complete text pages including complex ground truth are required. This is why such databases are scarce, especially for Arabic, the second most popular language. Moreover, Arabic handwriting recognition involves different preprocessing, segmentation and recognition methods, each of which requires particular ground truth or samples to enable optimal training and validation, and these are often not covered by the currently available databases. To overcome this issue, we propose a system that synthesizes Arabic handwritten words and text pages and generates the corresponding detailed ground truth. We use these syntheses to validate a new, segmentation-based system that recognizes handwritten Arabic words. We found that a modification of the Active Shape Model based character classifier that we proposed earlier improves word recognition accuracy. Further improvements are achieved by using a vocabulary of the 50,000 most common Arabic words for error correction. PMID:26978368
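
    The error-correction step described in the two records above (matching recognizer output against a list of the 50,000 most common Arabic words) can be illustrated with a minimal, language-agnostic sketch. The tiny frequency-ranked vocabulary, the Latin transliterations, and the plain Levenshtein matcher below are illustrative assumptions, not the authors' implementation.

    ```python
    # Minimal sketch of vocabulary-based error correction for a word recognizer:
    # replace a hypothesis with the closest entry in a frequency-ranked word list
    # (a toy stand-in for the 50,000-word vocabulary).

    def levenshtein(a, b):
        """Classic dynamic-programming edit distance."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                 # deletion
                                curr[j - 1] + 1,             # insertion
                                prev[j - 1] + (ca != cb)))   # substitution
            prev = curr
        return prev[-1]

    def correct(hypothesis, vocabulary):
        """Return the vocabulary word closest to the recognizer's hypothesis.
        Ties are broken by vocabulary rank (more frequent words first)."""
        return min(vocabulary, key=lambda w: (levenshtein(hypothesis, w),
                                              vocabulary.index(w)))

    # Toy example with Latin transliterations (hypothetical data).
    vocab = ["kitab", "kataba", "maktab", "kalima"]   # frequency-ranked
    print(correct("kitap", vocab))                    # -> "kitab"
    ```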

  11. The optimal viewing position effect in printed versus cursive words: Evidence of a reading cost for the cursive font.

    PubMed

    Danna, Jérémy; Massendari, Delphine; Furnari, Benjamin; Ducrot, Stéphanie

    2018-06-13

    Two eye-movement experiments were conducted to examine the effects of font type on the recognition of words presented in central vision, using a variable-viewing-position technique. Two main questions were addressed: (1) Is the optimal viewing position (OVP) for word recognition modulated by font type? (2) Is the cursive font more appropriate than the printed font in word recognition in children who exclusively write using a cursive script? In order to disentangle the role of perceptual difficulty associated with the cursive font and the impact of writing habits, we tested French adults (Experiment 1) and second-grade French children, the latter having exclusively learned to write in cursive (Experiment 2). Results revealed that the printed font is more appropriate than the cursive for recognizing words in both adults and children: adults were slightly less accurate in cursive than in printed stimuli recognition and children were slower to identify cursive stimuli than printed stimuli. Eye-movement measures also revealed that the OVP curves were flattened in cursive font in both adults and children. We concluded that the perceptual difficulty of the cursive font degrades word recognition by impacting the OVP stability. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Role of processing speed and depressed mood on encoding, storage, and retrieval memory functions in patients diagnosed with schizophrenia.

    PubMed

    Brébion, Gildas; David, Anthony S; Bressan, Rodrigo A; Pilowsky, Lyn S

    2007-01-01

    The role of various types of slowing of processing speed, as well as the role of depressed mood, on each stage of verbal memory functioning in patients diagnosed with schizophrenia was investigated. Mixed lists of high- and low-frequency words were presented, and immediate and delayed free recall and recognition were required. Two levels of encoding were studied by contrasting the relatively automatic encoding of the high-frequency words and the more effortful encoding of the low-frequency words. Storage was studied by contrasting immediate and delayed recall. Retrieval was studied by contrasting free recall and recognition. Three tests of motor and cognitive processing speed were administered as well. Regression analyses involving the three processing speed measures revealed that cognitive speed was the only predictor of the recall and recognition of the low-frequency words. Furthermore, slowing in cognitive speed accounted for the deficit in recall and recognition of the low-frequency words relative to a healthy control group. Depressed mood was significantly associated with recognition of the low-frequency words. Neither processing speed nor depressed mood was associated with storage efficiency. It is concluded that both cognitive speed slowing and depressed mood impact on effortful encoding processes.

  13. Predicting word-recognition performance in noise by young listeners with normal hearing using acoustic, phonetic, and lexical variables.

    PubMed

    McArdle, Rachel; Wilson, Richard H

    2008-06-01

    The aims were to analyze the 50% correct recognition data from the Wilson et al (this issue) study, obtained from 24 listeners with normal hearing, and to examine whether acoustic, phonetic, or lexical variables can predict recognition performance for monosyllabic words presented in speech-spectrum noise. The specific variables are as follows: (a) acoustic variables (i.e., effective root-mean-square sound pressure level, duration), (b) phonetic variables (i.e., consonant features such as manner, place, and voicing for initial and final phonemes; vowel phonemes), and (c) lexical variables (i.e., word frequency, word familiarity, neighborhood density, neighborhood frequency). This descriptive, correlational study examined the influence of acoustic, phonetic, and lexical variables on speech-recognition-in-noise performance. Regression analysis demonstrated that 45% of the variance in the 50% point was accounted for by acoustic and phonetic variables, whereas only 3% of the variance was accounted for by lexical variables. These findings suggest that monosyllabic word recognition in noise is more dependent on bottom-up processing than on top-down processing. The results suggest that when speech-in-noise testing is used in a pre- and post-hearing-aid-fitting format, the use of monosyllabic words may be sensitive to changes in audibility resulting from amplification.
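
    The reported split of explained variance (45% for the acoustic and phonetic predictors versus 3% added by the lexical predictors) is the kind of result a hierarchical least-squares regression yields. The sketch below shows the R-squared comparison on synthetic data; the predictor sets, sample size, and effect sizes are assumptions made purely for illustration.

    ```python
    import numpy as np

    def r_squared(X, y):
        """R^2 of an ordinary least-squares fit with an intercept column."""
        X = np.column_stack([np.ones(len(y)), X])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1.0 - resid.var() / y.var()

    rng = np.random.default_rng(0)
    n = 70                                        # hypothetical number of words
    acoustic_phonetic = rng.normal(size=(n, 3))   # e.g., RMS level, duration, voicing
    lexical = rng.normal(size=(n, 2))             # e.g., word frequency, density
    # Synthetic 50%-point thresholds driven mostly by acoustic/phonetic factors.
    y = 2.0 * acoustic_phonetic[:, 0] + 0.3 * lexical[:, 0] + rng.normal(size=n)

    r2_ap = r_squared(acoustic_phonetic, y)
    r2_full = r_squared(np.column_stack([acoustic_phonetic, lexical]), y)
    print(f"acoustic+phonetic R^2: {r2_ap:.2f}")
    print(f"added by lexical:      {r2_full - r2_ap:.2f}")
    ```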

  14. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), one used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequents, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. For recognition of n words, n SRNFNs are created, one modeling each word; each SRNFN receives the features of the current frame and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. For filtering, a single SRNFN is created, and each SRNFN recognizer is connected to this same SRNFN filter, which filters noisy speech patterns in the feature domain before they are fed to the recognizer. Experiments with Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with the HSRNFN for noisy speech recognition tasks.
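
    The recognition rule described above is straightforward: one predictor per word, and an utterance is assigned to the word whose model forecasts its next frame with the smallest error. The sketch below implements that decision rule with a toy linear one-step predictor standing in for each SRNFN; the feature sequences, dynamics, and word labels are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    class LinearFramePredictor:
        """Toy one-step frame predictor (a stand-in for one SRNFN word model)."""
        def __init__(self, frames):
            # Fit W so that frames[t] @ W approximates frames[t + 1] (least squares).
            self.W, *_ = np.linalg.lstsq(frames[:-1], frames[1:], rcond=None)

        def prediction_error(self, frames):
            return float(np.mean((frames[1:] - frames[:-1] @ self.W) ** 2))

    def recognize(utterance, models):
        """Assign the utterance to the word whose model predicts its frames best."""
        return min(models, key=lambda w: models[w].prediction_error(utterance))

    def make_sequence(A, n_frames=40, noise=0.05):
        """Generate a feature sequence governed by word-specific dynamics A."""
        frames = [rng.normal(size=A.shape[0])]
        for _ in range(n_frames - 1):
            frames.append(frames[-1] @ A + noise * rng.normal(size=A.shape[0]))
        return np.array(frames)

    # Invented per-word dynamics and training sequences for three word labels.
    dynamics = {w: 0.9 * np.linalg.qr(rng.normal(size=(8, 8)))[0] for w in ["yi", "er", "san"]}
    models = {w: LinearFramePredictor(make_sequence(A)) for w, A in dynamics.items()}
    print(recognize(make_sequence(dynamics["er"]), models))   # expected: "er"
    ```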

  15. High-Fidelity Visual Long-Term Memory within an Unattended Blink of an Eye.

    PubMed

    Kuhbandner, Christof; Rosas-Corona, Elizabeth A; Spachtholz, Philipp

    2017-01-01

    What is stored in long-term memory from current sensations is a question that has attracted considerable interest. Over time, several prominent theories have consistently proposed that only attended sensory information leaves a durable memory trace whereas unattended information is not stored beyond the current moment, an assumption that seems to be supported by abundant empirical evidence. Here we show, by using a more sensitive memory test than in previous studies, that this is actually not true. Observers viewed a rapid stream of real-world object pictures overlapped by words (presentation duration per stimulus: 500 ms, interstimulus interval: 200 ms), with the instruction to attend to the words and detect word repetitions, without knowing that their memory would be tested later. In a surprise two-alternative forced-choice recognition test, memory for the unattended object pictures was tested. Memory performance was substantially above chance, even when detailed feature knowledge was necessary for correct recognition, even when tested 24 h later, and even though participants reported having no memory of the pictures. These findings suggest that humans have the ability to store detailed copies of current visual stimulation in long-term memory at high speed, independently of current intentions and the current attentional focus.

  16. The Role of Morphology in Word Recognition of Hebrew as a Templatic Language

    ERIC Educational Resources Information Center

    Oganyan, Marina

    2017-01-01

    Research on recognition of complex words has primarily focused on affixational complexity in concatenative languages. This dissertation investigates both templatic and affixational complexity in Hebrew, a templatic language, with particular focus on the role of the root and template morphemes in recognition. It also explores the role of morphology…

  17. Using Recall to Reduce False Recognition: Diagnostic and Disqualifying Monitoring

    ERIC Educational Resources Information Center

    Gallo, David A.

    2004-01-01

    Whether recall of studied words (e.g., parsley, rosemary, thyme) could reduce false recognition of related lures (e.g., basil) was investigated. Subjects studied words from several categories for a final recognition memory test. Half of the subjects were given standard test instructions, and half were instructed to use recall to reduce false…

  18. Perceptual learning for speech in noise after application of binary time-frequency masks

    PubMed Central

    Ahmadi, Mahnaz; Gross, Vauna L.; Sinex, Donal G.

    2013-01-01

    Ideal time-frequency (TF) masks can reject noise and improve the recognition of speech-noise mixtures. An ideal TF mask is constructed with prior knowledge of the target speech signal. The intelligibility of a processed speech-noise mixture depends upon the threshold criterion used to define the TF mask. The study reported here assessed the effect of training on the recognition of speech in noise after processing by ideal TF masks that did not restore perfect speech intelligibility. Two groups of listeners with normal hearing listened to speech-noise mixtures processed by TF masks calculated with different threshold criteria. For each group, a threshold criterion that initially produced word recognition scores between 0.56 and 0.69 was chosen for training. Listeners practiced with one set of TF-masked sentences until their word recognition performance approached asymptote. Perceptual learning was quantified by comparing word-recognition scores in the first and last training sessions. Word recognition scores improved with practice for all listeners, with the greatest improvement observed for the same materials used in training. PMID:23464038
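
    The construction of an ideal binary TF mask follows directly from its definition: with the premixed speech and noise in hand, a time-frequency unit is retained when its local speech-to-noise ratio exceeds the chosen threshold criterion and discarded otherwise. A minimal sketch on magnitude spectrograms is given below; the spectrogram shapes, the threshold value, and the magnitude-domain application (no phase handling or resynthesis) are simplifying assumptions.

    ```python
    import numpy as np

    def ideal_binary_mask(speech_mag, noise_mag, lc_db=-6.0, eps=1e-12):
        """Keep a time-frequency unit when the local SNR (dB) exceeds the
        local criterion (LC); zero it otherwise."""
        local_snr_db = 20.0 * np.log10((speech_mag + eps) / (noise_mag + eps))
        return (local_snr_db > lc_db).astype(float)

    # Hypothetical magnitude spectrograms: (frequency bins x time frames).
    rng = np.random.default_rng(2)
    speech = np.abs(rng.normal(size=(257, 100)))
    noise = np.abs(rng.normal(size=(257, 100)))

    mask = ideal_binary_mask(speech, noise, lc_db=-6.0)
    processed = mask * (speech + noise)   # masked mixture (magnitude domain only)
    print(f"fraction of TF units retained: {mask.mean():.2f}")
    ```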

  19. Unilateral Hearing Loss: Understanding Speech Recognition and Localization Variability - Implications for Cochlear Implant Candidacy

    PubMed Central

    Firszt, Jill B.; Reeder, Ruth M.; Holden, Laura K.

    2016-01-01

    Objectives At a minimum, unilateral hearing loss (UHL) impairs sound localization ability and understanding speech in noisy environments, particularly if the loss is severe to profound. Accompanying the numerous negative consequences of UHL is considerable unexplained individual variability in the magnitude of its effects. Identification of co-variables that affect outcome and contribute to variability in UHLs could augment counseling, treatment options, and rehabilitation. Cochlear implantation as a treatment for UHL is on the rise yet little is known about factors that could impact performance or whether there is a group at risk for poor cochlear implant outcomes when hearing is near-normal in one ear. The overall goal of our research is to investigate the range and source of variability in speech recognition in noise and localization among individuals with severe to profound UHL and thereby help determine factors relevant to decisions regarding cochlear implantation in this population. Design The present study evaluated adults with severe to profound UHL and adults with bilateral normal hearing. Measures included adaptive sentence understanding in diffuse restaurant noise, localization, roving-source speech recognition (words from 1 of 15 speakers in a 140° arc) and an adaptive speech-reception threshold psychoacoustic task with varied noise types and noise-source locations. There were three age-gender-matched groups: UHL (severe to profound hearing loss in one ear and normal hearing in the contralateral ear), normal hearing listening bilaterally, and normal hearing listening unilaterally. Results Although the normal-hearing-bilateral group scored significantly better and had less performance variability than UHLs on all measures, some UHL participants scored within the range of the normal-hearing-bilateral group on all measures. The normal-hearing participants listening unilaterally had better monosyllabic word understanding than UHLs for words presented on the blocked/deaf side but not the open/hearing side. In contrast, UHLs localized better than the normal hearing unilateral listeners for stimuli on the open/hearing side but not the blocked/deaf side. This suggests that UHLs had learned strategies for improved localization on the side of the intact ear. The UHL and unilateral normal hearing participant groups were not significantly different for speech-in-noise measures. UHL participants with childhood rather than recent hearing loss onset localized significantly better; however, these two groups did not differ for speech recognition in noise. Age at onset in UHL adults appears to affect localization ability differently than understanding speech in noise. Hearing thresholds were significantly correlated with speech recognition for UHL participants but not the other two groups. Conclusions Auditory abilities of UHLs varied widely and could be explained only in part by hearing threshold levels. Age at onset and length of hearing loss influenced performance on some, but not all measures. Results support the need for a revised and diverse set of clinical measures, including sound localization, understanding speech in varied environments and careful consideration of functional abilities as individuals with severe to profound UHL are being considered potential cochlear implant candidates. PMID:28067750

  20. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, it is a matter of debate as to the contributions of high and low spatial frequency (HSF and LSF) information for visual word recognition. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects, however they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects and therefore suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects indicating that larger scale information may still play a role in word recognition.

  1. Locus of word frequency effects in spelling to dictation: Still at the orthographic level!

    PubMed

    Bonin, Patrick; Laroche, Betty; Perret, Cyril

    2016-11-01

    The present study was aimed at testing the locus of word frequency effects in spelling to dictation: Are they located at the level of spoken word recognition (Chua & Rickard Liow, 2014) or at the level of the orthographic output lexicon (Delattre, Bonin, & Barry, 2006)? Words that varied on objective word frequency and on phonological neighborhood density were orally presented to adults who had to write them down. Following the additive factors logic (Sternberg, 1969, 2001), if word frequency in spelling to dictation influences a processing level, that is, the orthographic output level, different from that influenced by phonological neighborhood density, that is, spoken word recognition, the impact of the 2 factors should be additive. In contrast, their influence should be overadditive if they act at the same processing level in spelling to dictation, namely the spoken word recognition level. We found that both factors had a reliable influence on the spelling latencies but did not interact. This finding is in line with an orthographic output locus hypothesis of word frequency effects in spelling to dictation. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
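
    The additive-factors test itself reduces to checking an interaction in a 2 x 2 design: if word frequency and phonological neighborhood density act at different processing levels, their effects on spelling latency should add, so the interaction contrast should be near zero. A sketch of that contrast on simulated latencies follows; the cell means, noise level, and item counts are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40  # hypothetical items per cell

    # Simulated spelling-to-dictation latencies (ms) for a 2 x 2 design:
    # word frequency (high/low) x neighborhood density (dense/sparse),
    # built here with purely additive effects (no interaction).
    base, freq_effect, density_effect = 900.0, 80.0, 40.0
    cells = {}
    for f, fe in [("high_freq", 0.0), ("low_freq", freq_effect)]:
        for d, de in [("dense", 0.0), ("sparse", density_effect)]:
            cells[(f, d)] = base + fe + de + rng.normal(0.0, 60.0, size=n)

    means = {k: v.mean() for k, v in cells.items()}
    freq_dense = means[("low_freq", "dense")] - means[("high_freq", "dense")]
    freq_sparse = means[("low_freq", "sparse")] - means[("high_freq", "sparse")]
    interaction = freq_dense - freq_sparse   # difference of frequency effects
    print(f"frequency effect (dense):  {freq_dense:.1f} ms")
    print(f"frequency effect (sparse): {freq_sparse:.1f} ms")
    print(f"interaction contrast:      {interaction:.1f} ms (near zero => additive)")
    ```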

  2. Effect of phonological and morphological awareness on reading comprehension in Hebrew-speaking adolescents with reading disabilities.

    PubMed

    Schiff, Rachel; Schwartz-Nahshon, Sarit; Nagar, Revital

    2011-06-01

    This research explored phonological and morphological awareness among Hebrew-speaking adolescents with reading disabilities (RD) and its effect on reading comprehension beyond phonological and word-reading abilities. Participants included 39 seventh graders with RD and two matched control groups of normal readers: 40 seventh graders matched for chronological age (CA) and 38 third graders matched for reading age (RA). We assessed phonological awareness, word reading, morphological awareness, and reading comprehension. Findings indicated that the RD group performed similarly to the RA group on phonological awareness but lower on phonological decoding. On the decontextualized morphological task, RD functioned on par with RA, whereas in a contextualized task RD performed above RA but lower than CA. In reading comprehension, RD performed as well as RA. Finally, results indicated that for normal readers contextual morphological awareness uniquely contributed to reading comprehension beyond phonological and word-reading abilities, whereas no such unique contribution emerged for the RD group. The absence of an effect of morphological awareness in predicting reading comprehension was suggested to be related to a different recognition process employed by RD readers, which hinders their ability to use morphosemantic structures. The lexical quality hypothesis was proposed as further support for the findings, suggesting that low-quality lexical representations in RD students lead to ineffective reading skills and comprehension. Lexical representation is thus critical for both lexical and comprehension abilities.

  3. Selective handling of information in patients suffering from restrictive anorexia in an emotional Stroop test and a word recognition test.

    PubMed

    Mendlewicz, L; Nef, F; Simon, Y

    2001-01-01

    Several studies have been carried out using the Stroop test in eating disorders. Some of these studies have brought to light the existence of cognitive and attention deficits linked principally to weight and to food in anorexic and bulimic patients. The aim of the current study is to replicate and to clarify the existence of cognitive and attention deficits in anorexic patients using the Stroop test and a word recognition test. The recognition test is made up of 160 words: 80 words from the previous Stroop experiment, mixed at random and matched from a semantic point of view to 80 distractors. The word recognition test is carried out 2 or 3 days after the Stroop test. Thirty-two subjects took part in the study: 16 female patients hospitalised for anorexia nervosa and 16 normal females as controls. Our results do not enable us to confirm the existence of specific cognitive deficits in anorexic patients. Copyright 2001 S. Karger AG, Basel

  4. Distributional structure in language: Contributions to noun–verb difficulty differences in infant word recognition

    PubMed Central

    Willits, Jon A.; Seidenberg, Mark S.; Saffran, Jenny R.

    2014-01-01

    What makes some words easy for infants to recognize, and other words difficult? We addressed this issue in the context of prior results suggesting that infants have difficulty recognizing verbs relative to nouns. In this work, we highlight the role played by the distributional contexts in which nouns and verbs occur. Distributional statistics predict that English nouns should generally be easier to recognize than verbs in fluent speech. However, there are situations in which distributional statistics provide similar support for verbs. The statistics for verbs that occur with the English morpheme –ing, for example, should facilitate verb recognition. In two experiments with 7.5- and 9.5-month-old infants, we tested the importance of distributional statistics for word recognition by varying the frequency of the contextual frames in which verbs occur. The results support the conclusion that distributional statistics are utilized by infant language learners and contribute to noun–verb differences in word recognition. PMID:24908342
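
    One way to make the notion of distributional support concrete is to tally, for each target word, how often the frames surrounding it occur in a corpus. The sketch below computes such a crude one-word-frame tally on an invented toy corpus; it is an illustrative proxy, not the distributional measure used in the study.

    ```python
    from collections import Counter

    def frame_support(corpus_tokens, targets):
        """For each target word, sum the corpus frequency of the one-word frames
        (preceding_word ... following_word) in which it occurs, a crude proxy for
        the distributional support available for recognizing that word."""
        frames = Counter()
        target_frames = {t: set() for t in targets}
        for prev, word, nxt in zip(corpus_tokens, corpus_tokens[1:], corpus_tokens[2:]):
            frames[(prev, nxt)] += 1
            if word in targets:
                target_frames[word].add((prev, nxt))
        return {t: sum(frames[f] for f in target_frames[t]) for t in targets}

    # Toy child-directed corpus (invented).
    corpus = ("the dog is running in the park and the cat is sleeping on the bed "
              "the baby is running to the dog").split()
    # Higher totals indicate that a word appears in more frequent, more predictive frames.
    print(frame_support(corpus, targets={"running", "dog"}))
    ```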

  5. Asynchronous glimpsing of speech: Spread of masking and task set-size

    PubMed Central

    Ozmeral, Erol J.; Buss, Emily; Hall, Joseph W.

    2012-01-01

    Howard-Jones and Rosen [(1993). J. Acoust. Soc. Am. 93, 2915–2922] investigated the ability to integrate glimpses of speech that are separated in time and frequency using a “checkerboard” masker, with asynchronous amplitude modulation (AM) across frequency. Asynchronous glimpsing was demonstrated only for spectrally wide frequency bands. It is possible that the reduced evidence of spectro-temporal integration with narrower bands was due to spread of masking at the periphery. The present study tested this hypothesis with a dichotic condition, in which the even- and odd-numbered bands of the target speech and asynchronous AM masker were presented to opposite ears, minimizing the deleterious effects of masking spread. For closed-set consonant recognition, thresholds were 5.1–8.5 dB better for dichotic than for monotic asynchronous AM conditions. Results were similar for closed-set word recognition, but for open-set word recognition the benefit of dichotic presentation was more modest and level dependent, consistent with the effects of spread of masking being level dependent. There was greater evidence of asynchronous glimpsing in the open-set than closed-set tasks. Presenting stimuli dichotically supported asynchronous glimpsing with narrower frequency bands than previously shown, though the magnitude of glimpsing was reduced for narrower bandwidths even in some dichotic conditions. PMID:22894234

  6. Negative words enhance recognition in nonclinical high dissociators: An fMRI study.

    PubMed

    de Ruiter, Michiel B; Veltman, Dick J; Phaf, R Hans; van Dyck, Richard

    2007-08-01

    Memory encoding and retrieval were studied in a nonclinical sample of participants that differed in the amount of reported dissociative experiences (trait dissociation). Behavioral as well as functional imaging (fMRI) indices were used as convergent measures of memory functioning. In a deep vs. shallow encoding paradigm, the influence of dissociative style on elaborative and avoidant encoding was studied, respectively. Furthermore, affectively neutral and negative words were presented, to test whether the effects of dissociative tendencies on memory functioning depended on the affective valence of the stimulus material. Results showed that (a) deep encoding of negative vs. neutral stimuli was associated with higher levels of semantic elaboration in high than in low dissociators, as indicated by increased levels of activity in hippocampus and prefrontal cortex during encoding and higher memory performance during recognition, (b) high dissociators were generally characterized by higher levels of conscious recollection as indicated by increased activity of the hippocampus and posterior parietal areas during recognition, (c) nonclinical high dissociators were not characterized by an avoidant encoding style. These results support the notion that trait dissociation in healthy individuals is associated with high levels of elaborative encoding, resulting in high levels of conscious recollection. These abilities, in addition, seem to depend on the salience of the presented stimulus material.

  7. Improvement in word recognition score with level is associated with hearing aid ownership among patients with hearing loss.

    PubMed

    Halpin, Chris; Rauch, Steven D

    2012-01-01

    Market surveys consistently show that only 22% of those with hearing loss own hearing aids. This is often ascribed to cosmetics, but is it possible that patients apply a different auditory criterion than do audiologists and manufacturers? We tabulated hearing aid ownership in a survey of 1000 consecutive patients. We separated hearing loss cases, with one cohort in which word recognition in quiet could improve with gain (vs. 40 dB HL) and another without such improvement but nonetheless with audiometric thresholds within the manufacturer's fitting ranges. Overall, we found that exactly 22% of hearing loss patients in this sample owned hearing aids; the same finding has been reported in many previous, well-accepted surveys. However, while all patients in the two cohorts experienced difficulty in noise, patients in the cohort without word recognition improvement were found to own hearing aids at a rate of 0.3%, while those patients whose word recognition could increase with level were found to own hearing aids at a rate of 50%. Results also coherently fit a logistic model where shift of the word recognition performance curve by level corresponded to the likelihood of ownership. In addition to the common attribution of low hearing aid usage to patient denial, cosmetic issues, price, or social stigma, these results provide one alternative explanation based on measurable improvement in word recognition performance. Copyright © 2011 S. Karger AG, Basel.
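
    The logistic model relating improvement in word recognition score with level to the likelihood of hearing aid ownership has a simple functional form, sketched below. The coefficients and the example gains are invented to show the shape of the model only; they are not the values fitted in the study.

    ```python
    import numpy as np

    def ownership_probability(wrs_gain, b0=-3.0, b1=0.12):
        """Logistic model: probability of hearing aid ownership as a function of
        the improvement in word recognition score with level (percentage points).
        The coefficients here are illustrative, not fitted to the study data."""
        return 1.0 / (1.0 + np.exp(-(b0 + b1 * np.asarray(wrs_gain, dtype=float))))

    for gain in [0, 10, 25, 40]:
        print(f"WRS improvement {gain:2d} pts -> P(ownership) = {ownership_probability(gain):.2f}")
    ```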

  8. Response-related fMRI of veridical and false recognition of words.

    PubMed

    Heun, Reinhard; Jessen, Frank; Klose, Uwe; Erb, Michael; Granath, Dirk-Oliver; Grodd, Wolfgang

    2004-02-01

    Studies on the relation between local cerebral activation and retrieval success usually compared high and low performance conditions, and thus showed performance-related activation of different brain areas. Only a few studies directly compared signal intensities of different response categories during retrieval. During verbal recognition, we recently observed increased parieto-occipital activation related to false alarms. The present study intends to replicate and extend this observation by investigating common and differential activation by veridical and false recognition. Fifteen healthy volunteers performed a verbal recognition paradigm using 160 learned target and 160 new distractor words. The subjects had to indicate whether they had learned the word before or not. Echo-planar MRI of blood-oxygen-level-dependent signal changes was performed during this recognition task. Words were classified post hoc according to the subjects' responses, i.e. hits, false alarms, correct rejections and misses. Response-related fMRI-analysis was used to compare activation associated with the subjects' recognition success, i.e. signal intensities related to the presentation of words were compared by the above-mentioned four response types. During recognition, all word categories showed increased bilateral activation of the inferior frontal gyrus, the inferior temporal gyrus, the occipital lobe and the brainstem in comparison with the control condition. Hits and false alarms activated several areas including the left medial and lateral parieto-occipital cortex in comparison with subjectively unknown items, i.e. correct rejections and misses. Hits showed more pronounced activation in the medial, false alarms in the lateral parts of the left parieto-occipital cortex. Veridical and false recognition show common as well as different areas of cerebral activation in the left parieto-occipital lobe: increased activation of the medial parietal cortex by hits may correspond to true recognition, increased activation of the parieto-occipital cortex by false alarms may correspond to familiarity decisions. Further studies are needed to investigate the reasons for false decisions in healthy subjects and patients with memory problems.

  9. Interrupted Monosyllabic Words: The Effects of Ten Interruption Locations on Recognition Performance by Older Listeners with Sensorineural Hearing Loss.

    PubMed

    Wilson, Richard H; Sharrett, Kadie C

    2017-01-01

    Two previous experiments from our laboratory with 70 interrupted monosyllabic words demonstrated that recognition performance was influenced by the temporal location of the interruption pattern. The interruption pattern (10 interruptions/sec, 50% duty cycle) was always the same and referenced word onset; the only difference between the patterns was the temporal location of the on- and off-segments of the interruption cycle. In the first study, both young and older listeners obtained better recognition performances when the initial on-segment coincided with word onset than when the initial on-segment was delayed by 50 msec. The second experiment with 24 young listeners detailed recognition performance as the interruption pattern was incremented in 10-msec steps through the 0- to 90-msec onset range. Across the onset conditions, 95% of the functions were either flat or U-shaped. To define the effects that interruption pattern locations had on word recognition by older listeners with sensorineural hearing loss as the interruption pattern incremented, re: word onset, from 0 to 90 msec in 10-msec steps. A repeated-measures design with ten interruption patterns (onset conditions) and one uninterruption condition. Twenty-four older males (mean = 69.6 yr) with sensorineural hearing loss participated in two 1-hour sessions. The three-frequency pure-tone average was 24.0 dB HL and word recognition was ≥80% correct. Seventy consonant-vowel nucleus-consonant words formed the corpus of materials with 25 additional words used for practice. For each participant, the 700 interrupted stimuli (70 words by 10 onset conditions), the 70 words uninterrupted, and two practice lists each were randomized and recorded on compact disc in 33 tracks of 25 words each. The data were analyzed at the participant and word levels and compared to the results obtained earlier on 24 young listeners with normal hearing. The mean recognition performance on the 70 words uninterrupted was 91.0% with an overall mean performance on the ten interruption conditions of 63.2% (range: 57.9-69.3%), compared to 80.4% (range: 73.0-87.7%) obtained earlier on the young adults. The best performances were at the extremes of the onset conditions. Standard deviations ranged from 22.1% to 28.1% (24 participants) and from 9.2% to 12.8% (70 words). An arithmetic algorithm categorized the shapes of the psychometric functions across the ten onset conditions. With the older participants in the current study, 40% of the functions were flat, 41.4% were U-shaped, and 18.6% were inverted U-shaped, which compared favorably to the function shapes by the young listeners in the earlier study of 50.0%, 41.4%, and 8.6%, respectively. There were two words on which the older listeners had 40% better performances. Collectively, the data are orderly, but at the individual word or participant level, the data are somewhat volatile, which may reflect auditory processing differences between the participant groups. The diversity of recognition performances by the older listeners on the ten interruption conditions with each of the 70 words supports the notion that the term hearing loss is inclusive of processes well beyond the filtering produced by end-organ sensitivity deficits. American Academy of Audiology
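
    The paper's arithmetic algorithm for labeling each word's function across the ten onset conditions is not spelled out in the abstract, but the three categories it produces (flat, U-shaped, inverted U-shaped) can be illustrated with a simple stand-in rule that compares the middle onset conditions with the endpoints. The rule, the tolerance, and the example functions below are assumptions for illustration only.

    ```python
    def categorize_function(scores, tolerance=10.0):
        """Classify one word's recognition-by-onset function (10 onset conditions,
        percent correct) as 'flat', 'U-shaped', or 'inverted U-shaped'.
        Illustrative stand-in; the study's own arithmetic rule is not given here."""
        ends = (scores[0] + scores[-1]) / 2.0
        middle = sum(scores[3:7]) / len(scores[3:7])
        if abs(middle - ends) <= tolerance:
            return "flat"
        return "U-shaped" if middle < ends else "inverted U-shaped"

    # Hypothetical per-word functions (percent correct at onsets 0 to 90 ms).
    print(categorize_function([80, 75, 60, 50, 45, 48, 55, 65, 72, 78]))  # U-shaped
    print(categorize_function([62, 60, 63, 61, 59, 60, 62, 64, 61, 60]))  # flat
    ```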

  10. Postprocessing for character recognition using pattern features and linguistic information

    NASA Astrophysics Data System (ADS)

    Yoshikawa, Takatoshi; Okamoto, Masayosi; Horii, Hiroshi

    1993-04-01

    We propose a new post-processing method for character recognition that uses pattern features and linguistic information. The method corrects errors in the recognition of handwritten Japanese sentences containing Kanji characters and is characterized by using two types of character recognition. Improving the character recognition rate for Japanese is made difficult by the large number of characters and the existence of characters with similar patterns, so it is not practical for a character recognition system to recognize all characters in detail. First, the post-processing method generates a candidate character table by recognizing the simplest features of characters. It then selects words corresponding to the characters in the candidate table by referring to a word and grammar dictionary, and chooses suitable words. If the correct character is included in the candidate character table, this process can correct an error; if it is not included, it cannot. However, if the method can presume, using linguistic information (the word and grammar dictionary), that the correct character is missing from the candidate table, it can then verify the presumed character by character recognition using complex features. When this method is applied to an online character recognition system, the accuracy of character recognition improves from 93.5% to 94.7%. This proved to be the case when it was used on editorials from a Japanese newspaper (Asahi Shinbun).
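
    The first correction pass described above (build a per-position candidate character table from simple features, then keep only candidate strings licensed by a word dictionary) can be sketched as follows. The candidate table, recognition scores, and lexicon are invented, Latin letters stand in for Kanji, and the second verification pass with complex features is omitted.

    ```python
    from itertools import product

    def dictionary_correct(candidate_table, dictionary):
        """candidate_table: for each character position, a list of (candidate, score)
        pairs from the coarse recognizer. Return the highest-scoring candidate string
        that forms a dictionary word, or None if no candidate does (the case that
        would be handed to the second, fine-grained recognition pass)."""
        best_word, best_score = None, float("-inf")
        for combo in product(*candidate_table):
            word = "".join(ch for ch, _ in combo)
            score = sum(s for _, s in combo)
            if word in dictionary and score > best_score:
                best_word, best_score = word, score
        return best_word

    # Toy example with Latin letters standing in for candidate Kanji.
    table = [[("t", 0.9), ("f", 0.4)],
             [("o", 0.8), ("a", 0.7)],
             [("p", 0.6), ("n", 0.5)]]
    lexicon = {"ton", "tan", "fan", "top"}
    print(dictionary_correct(table, lexicon))   # -> "top" (score 2.3 beats "ton" at 2.2)
    ```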

  11. Design and performance of a large vocabulary discrete word recognition system. Volume 1: Technical report. [real time computer technique for voice data processing

    NASA Technical Reports Server (NTRS)

    1973-01-01

    The development, construction, and test of a 100-word-vocabulary, near-real-time word recognition system are reported. Included are reasonable replacement of any one or all 100 words in the vocabulary, rapid learning of a new speaker, storage and retrieval of training sets, verbal or manual single-word deletion, continuous adaptation with verbal or manual error correction, on-line verification of vocabulary as spoken, system modes selectable via the verification display keyboard, the relationship of a classified word to neighboring words, and a versatile input/output interface to accommodate a variety of applications.

  12. An analysis of initial acquisition and maintenance of sight words following picture matching and copy cover, and compare teaching methods.

    PubMed

    Conley, Colleen M; Derby, K Mark; Roberts-Gwinn, Michelle; Weber, Kimberly P; McLaughlin, T E

    2004-01-01

    This study compared the copy, cover, and compare method to a picture-word matching method for teaching sight word recognition. Participants were 5 kindergarten students with less than preprimer sight word vocabularies who were enrolled in a public school in the Pacific Northwest. A multielement design was used to evaluate the effects of the two interventions. Outcomes suggested that sight words taught using the copy, cover, and compare method resulted in better maintenance of word recognition when compared to the picture-matching intervention. Benefits to students and the practicality of employing the word-level teaching methods are discussed.

  13. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  14. The Effect of Talker Variability on Word Recognition in Preschool Children

    PubMed Central

    Ryalls, Brigette Oliver; Pisoni, David B.

    2012-01-01

    In a series of experiments, the authors investigated the effects of talker variability on children’s word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition. PMID:9149923

  15. Exploring the Neural Representation of Novel Words Learned through Enactment in a Word Recognition Task

    PubMed Central

    Macedonia, Manuela; Mueller, Karsten

    2016-01-01

    Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment. PMID:27445918

  16. Improving Measurement Efficiency of the Inner EAR Scale with Item Response Theory.

    PubMed

    Jessen, Annika; Ho, Andrew D; Corrales, C Eduardo; Yueh, Bevan; Shin, Jennifer J

    2018-02-01

    Objectives: (1) To assess the 11-item Inner Effectiveness of Auditory Rehabilitation (Inner EAR) instrument with item response theory (IRT). (2) To determine whether the underlying latent ability could also be accurately represented by a subset of the items for use in high-volume clinical scenarios. (3) To determine whether the Inner EAR instrument correlates with pure tone thresholds and word recognition scores. Design: IRT evaluation of prospective cohort data. Setting: Tertiary care academic ambulatory otolaryngology clinic. Subjects and Methods: Modern psychometric methods, including factor analysis and IRT, were used to assess unidimensionality and item properties. Regression methods were used to assess prediction of word recognition and pure tone audiometry scores. Results: The Inner EAR scale is unidimensional, and items varied in their location and information. Information parameter estimates ranged from 1.63 to 4.52, with higher values indicating more useful items. The IRT model provided a basis for identifying 2 sets of items with relatively lower information parameters. Item information functions demonstrated which items added insubstantial value over and above other items; these were removed in stages, creating 8- and 3-item versions of the Inner EAR scale for more efficient assessment. The 8-item version accurately reflected the underlying construct. All versions correlated moderately with word recognition scores and pure tone averages. Conclusion: The 11-, 8-, and 3-item versions of the Inner EAR scale have strong psychometric properties, and there is correlational validity evidence for the observed scores. Modern psychometric methods can help streamline care delivery by maximizing relevant information per item administered.
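
    The abstract does not state which IRT model was fitted, so the sketch below is a minimal, hypothetical illustration assuming a two-parameter logistic (2PL) model, in which an item's Fisher information at ability theta is a^2 * P(theta) * (1 - P(theta)). The discrimination and location values are placeholders, not the published Inner EAR estimates; the sketch only shows why items with low information add little to the total test information and are natural candidates for removal.

      import numpy as np

      def item_information_2pl(theta, a, b):
          """Fisher information of a 2PL item at ability theta (a: discrimination, b: location)."""
          p = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # probability of endorsing the item
          return a ** 2 * p * (1.0 - p)                # information peaks where theta == b

      # Illustrative item parameters (hypothetical, not the published estimates)
      items = [(1.6, -0.5), (2.8, 0.0), (4.5, 0.4)]    # (discrimination, location) pairs
      thetas = np.linspace(-3, 3, 121)

      # Test information is the sum of item informations; items that contribute little
      # to this sum are the ones a shortened version of the scale can drop.
      test_information = sum(item_information_2pl(thetas, a, b) for a, b in items)
      print(round(float(test_information.max()), 2))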

  17. What Is in the Naming? A 5-Year Longitudinal Study of Early Rapid Naming and Phonological Sensitivity in Relation to Subsequent Reading Skills in Both Native Chinese and English as a Second Language

    ERIC Educational Resources Information Center

    Pan, Jinger; McBride-Chang, Catherine; Shu, Hua; Liu, Hongyun; Zhang, Yuping; Li, Hong

    2011-01-01

    Among 262 Chinese children, syllable awareness and rapid automatized naming (RAN) at age 5 years and invented spelling of Pinyin at age 6 years independently predicted subsequent Chinese character recognition and English word reading at ages 8 years and 10 years, even with initial Chinese character reading ability statistically controlled. In…

  18. Word Recognition and Learning: Effects of Hearing Loss and Amplification Feature

    PubMed Central

    Stewart, Elizabeth C.; Willman, Amanda P.; Odgear, Ian S.

    2017-01-01

    Two amplification features were examined using auditory tasks that varied in stimulus familiarity. It was expected that the benefits of certain amplification features would increase as the familiarity with the stimuli decreased. A total of 20 children and 15 adults with normal hearing as well as 21 children and 17 adults with mild to severe hearing loss participated. Three models of ear-level devices were selected based on the quality of the high-frequency amplification or the digital noise reduction (DNR) they provided. The devices were fitted to each participant and used during testing only. Participants completed three tasks: (a) word recognition, (b) repetition and lexical decision of real and nonsense words, and (c) novel word learning. Performance improved significantly with amplification for both the children and the adults with hearing loss. Performance improved further with wideband amplification for the children more than for the adults. In steady-state noise and multitalker babble, performance decreased for both groups with little to no benefit from amplification or from the use of DNR. When compared with the listeners with normal hearing, significantly poorer performance was observed for both the children and adults with hearing loss on all tasks with few exceptions. Finally, analysis of across-task performance confirmed the hypothesis that benefit increased as the familiarity of the stimuli decreased for wideband amplification but not for DNR. However, users who prefer DNR for listening comfort are not likely to jeopardize their ability to detect and learn new information when using this feature. PMID:29169314

  19. (Almost) Word for Word: As Voice Recognition Programs Improve, Students Reap the Benefits

    ERIC Educational Resources Information Center

    Smith, Mark

    2006-01-01

    Voice recognition software is hardly new--attempts at capturing spoken words and turning them into written text have been available to consumers for about two decades. But what was once an expensive and highly unreliable tool has made great strides in recent years, perhaps most recognized in programs such as Nuance's Dragon NaturallySpeaking…

  20. The Effects of Environmental Context on Recognition Memory and Claims of Remembering

    ERIC Educational Resources Information Center

    Hockley, William E.

    2008-01-01

    Recognition memory for words was tested in same or different contexts using the remember/know response procedure. Context was manipulated by presenting words in different screen colors and locations and by presenting words against real-world photographs. Overall hit and false-alarm rates were higher for tests presented in an old context compared…

  1. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  2. The Roles of Tonal and Segmental Information in Mandarin Spoken Word Recognition: An Eyetracking Study

    ERIC Educational Resources Information Center

    Malins, Jeffrey G.; Joanisse, Marc F.

    2010-01-01

    We used eyetracking to examine how tonal versus segmental information influence spoken word recognition in Mandarin Chinese. Participants heard an auditory word and were required to identify its corresponding picture from an array that included the target item ("chuang2" "bed"), a phonological competitor (segmental: chuang1 "window"; cohort:…

  3. Racial-Ethnic Differences in Word Fluency and Auditory Comprehension Among Persons With Poststroke Aphasia.

    PubMed

    Ellis, Charles; Peach, Richard K

    2017-04-01

    Objective: To examine aphasia outcomes and to determine whether the observed language profiles vary by race-ethnicity. Design: Retrospective cross-sectional study using a convenience sample of persons with aphasia (PWA) obtained from AphasiaBank, a database designed for the study of aphasia outcomes. Setting: Aphasia research laboratories. Participants: PWA (N=381; 339 white and 42 black individuals). Interventions: Not applicable. Main Outcome Measures: Western Aphasia Battery-Revised (WAB-R) total scale score (Aphasia Quotient) and subtest scores were analyzed for racial-ethnic differences. The WAB-R is a comprehensive assessment of communication function designed to evaluate PWA in the areas of spontaneous speech, auditory comprehension, repetition, and naming in addition to reading, writing, apraxia, and constructional, visuospatial, and calculation skills. Results: In univariate comparisons, black PWA exhibited lower word fluency (5.7 vs 7.6; P=.004), auditory word comprehension (49.0 vs 53.0; P=.021), and comprehension of sequential commands (44.2 vs 52.2; P=.012) when compared with white PWA. In multivariate comparisons, adjusted for age and years of education, black PWA exhibited lower word fluency (5.5 vs 7.6; P=.015), auditory word recognition (49.3 vs 53.3; P=.02), and comprehension of sequential commands (43.7 vs 53.2; P=.017) when compared with white PWA. Conclusions: This study identified racial-ethnic differences in word fluency and auditory comprehension ability among PWA. Both skills are critical to effective communication, and racial-ethnic differences in outcomes must be considered in treatment approaches designed to improve overall communication ability. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  4. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation.

    PubMed

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

    Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences, and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions.
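
    A minimal sketch of the frequency-controlled correlation described above is given below. It is not the McCauley and Christiansen (2014) chunking model itself; the per-word MWU counts, log frequencies, and ages of first production are hypothetical placeholders, and partial correlation via residualization stands in for whichever frequency control the authors actually applied.

      import numpy as np

      def partial_corr(x, y, control):
          """Correlate x and y after regressing a control variable out of both."""
          def residualize(v, c):
              design = np.column_stack([np.ones_like(c), c])     # intercept + control
              beta, *_ = np.linalg.lstsq(design, v, rcond=None)
              return v - design @ beta
          return np.corrcoef(residualize(x, control), residualize(y, control))[0, 1]

      # Hypothetical per-word measures (placeholders, not corpus-derived values)
      mwu_counts = np.array([12., 3., 45., 7., 30., 2., 19., 25.])    # MWUs containing each word
      log_freq   = np.array([4.1, 2.0, 5.3, 2.8, 4.9, 1.7, 3.9, 4.4])
      aoa_months = np.array([20., 30., 16., 27., 18., 33., 22., 19.])  # age of first production

      # A negative value would mean: the more MWUs a word occurs in, the earlier it is
      # produced, over and above what raw frequency already explains.
      print(partial_corr(mwu_counts, aoa_months, log_freq))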

  5. Facilitatory Effects of Multi-Word Units in Lexical Processing and Word Learning: A Computational Investigation

    PubMed Central

    Grimm, Robert; Cassani, Giovanni; Gillis, Steven; Daelemans, Walter

    2017-01-01

    Previous studies have suggested that children and adults form cognitive representations of co-occurring word sequences. We propose (1) that the formation of such multi-word unit (MWU) representations precedes and facilitates the formation of single-word representations in children and thus benefits word learning, and (2) that MWU representations facilitate adult word recognition and thus benefit lexical processing. Using a modified version of an existing computational model (McCauley and Christiansen, 2014), we extract MWUs from a corpus of child-directed speech (CDS) and a corpus of conversations among adults. We then correlate the number of MWUs within which each word appears with (1) age of first production and (2) adult reaction times on a word recognition task. In doing so, we take care to control for the effect of word frequency, as frequent words will naturally tend to occur in many MWUs. We also compare results to a baseline model which randomly groups words into sequences—and find that MWUs have a unique facilitatory effect on both response variables, suggesting that they benefit word learning in children and word recognition in adults. The effect is strongest on age of first production, implying that MWUs are comparatively more important for word learning than for adult lexical processing. We discuss possible underlying mechanisms and formulate testable predictions. PMID:28450842

  6. Age-Related Differences in Recognition Memory for Items and Associations: Contribution of Individual Differences in Working Memory and Metamemory

    PubMed Central

    Bender, Andrew R.; Raz, Naftali

    2012-01-01

    Ability to form new associations between unrelated items is particularly sensitive to aging, but the reasons for such differential vulnerability are unclear. In this study, we examined the role of objective and subjective factors (working memory and beliefs about memory strategies) on differential relations of age with recognition of items and associations. Healthy adults (N = 100, age 21 to 79) studied word pairs, completed item and association recognition tests, and rated the effectiveness of shallow (e.g., repetition) and deep (e.g., imagery or sentence generation) encoding strategies. Advanced age was associated with reduced working memory (WM) capacity and poorer associative recognition. In addition, reduced WM capacity, beliefs in the utility of ineffective encoding strategies, and lack of endorsement of effective ones were independently associated with impaired associative memory. Thus, maladaptive beliefs about memory in conjunction with reduced cognitive resources account in part for differences in associative memory commonly attributed to aging. PMID:22251381

  7. Recognition memory across the lifespan: the impact of word frequency and study-test interval on estimates of familiarity and recollection

    PubMed Central

    Meier, Beat; Rey-Mermet, Alodie; Rothen, Nicolas; Graf, Peter

    2013-01-01

    The goal of this study was to investigate recognition memory performance across the lifespan and to determine how estimates of recollection and familiarity contribute to performance. In each of three experiments, participants from five groups from 14 up to 85 years of age (children, young adults, middle-aged adults, young-old adults, and old-old adults) were presented with high- and low-frequency words in a study phase and were tested immediately afterwards and/or after a one-day retention interval. The results showed that word frequency and retention interval affected recognition memory performance as well as estimates of recollection and familiarity. Across the lifespan, the trajectory of recognition memory followed an inverted U-shaped function that was affected neither by word frequency nor by retention interval. The trajectory of estimates of recollection also followed an inverted U-shaped function, and was especially pronounced for low-frequency words. In contrast, estimates of familiarity did not differ across the lifespan. The results indicate that age differences in recognition memory are mainly due to differences in processes related to recollection, while the contribution of familiarity-based processes seems to be age-invariant. PMID:24198796

  8. Evaluation of a wireless audio streaming accessory to improve mobile telephone performance of cochlear implant users.

    PubMed

    Wolfe, Jace; Morais Duke, Mila; Schafer, Erin; Cire, George; Menapace, Christine; O'Neill, Lori

    2016-01-01

    The objective of this study was to evaluate the potential improvement in word recognition in quiet and in noise obtained with use of a Bluetooth-compatible wireless hearing assistance technology (HAT) relative to the acoustic mobile telephone condition (e.g. the mobile telephone receiver held to the microphone of the sound processor). A two-way repeated measures design was used to evaluate differences in telephone word recognition obtained in quiet and in competing noise in the acoustic mobile telephone condition compared to performance obtained with use of the CI sound processor and a telephone HAT. Sixteen adult users of Nucleus cochlear implants and the Nucleus 6 sound processor were included in this study. Word recognition over the mobile telephone in quiet and in noise was significantly better with use of the wireless HAT compared to performance in the acoustic mobile telephone condition. Word recognition over the mobile telephone was better in quiet when compared to performance in noise. The results of this study indicate that use of a wireless HAT improves word recognition over the mobile telephone in quiet and in noise relative to performance in the acoustic mobile telephone condition for a group of adult cochlear implant recipients.

  9. When fear forms memories: threat of shock and brain potentials during encoding and recognition.

    PubMed

    Weymar, Mathias; Bradley, Margaret M; Hamm, Alfons O; Lang, Peter J

    2013-03-01

    The anticipation of highly aversive events is associated with measurable defensive activation, and both animal and human research suggests that stress-inducing contexts can facilitate memory. Here, we investigated whether encoding stimuli in the context of anticipating an aversive shock affects recognition memory. Event-related potentials (ERPs) were measured during a recognition test for words that were encoded in a font color that signaled threat or safety. At encoding, cues signaling threat of shock, compared to safety, prompted enhanced P2 and P3 components. Correct recognition of words encoded in the context of threat, compared to safety, was associated with an enhanced old-new ERP difference (500-700 msec; centro-parietal), and this difference was most reliable for emotional words. Moreover, larger old-new ERP differences when recognizing emotional words encoded in a threatening context were associated with better recognition, compared to words encoded in safety. Taken together, the data indicate enhanced memory for stimuli encoded in a context in which an aversive event is merely anticipated, which could assist in understanding effects of anxiety and stress on memory processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.

    PubMed

    Liu, Han-Hui

    2018-01-01

    Background: Most previous inattentional blindness (IB) studies have focused on the factors that contribute to the detection of unattended stimuli. Age-related changes in IB have rarely been investigated across all age groups. In the current study, using a dual-task IB paradigm, we aimed to explore the age-related effects of attended stimulus type and of congruency between attended and unattended stimuli on IB. Methods: The study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) for the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) for the IB experiment. We applied a superimposed picture and word streams paradigm to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups showed significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Recognition of unattended pictures or words decreased from adolescents to young adults and middle-aged adults. When the pictures and words were congruent, all participants showed significantly higher recognition scores for unattended stimuli than in the incongruent condition. Adolescents and young adults did not show recognition differences when the primary task was attending to pictures or to words. Conclusion: All participants showed better recognition scores for attended stimuli than for unattended stimuli, and recognition scores decreased from adolescents to young and middle-aged adults. The findings partly support the attention capacity models of IB.

  11. Effects of orthographic consistency on eye movement behavior: German and English children and adults process the same words differently.

    PubMed

    Rau, Anne K; Moll, Kristina; Snowling, Margaret J; Landerl, Karin

    2015-02-01

    The current study investigated the time course of cross-linguistic differences in word recognition. We recorded eye movements of German and English children and adults while reading closely matched sentences, each including a target word manipulated for length and frequency. Results showed differential word recognition processes for both developing and skilled readers. Children of the two orthographies did not differ in terms of total word processing time, but this equal outcome was achieved quite differently. Whereas German children relied on small-unit processing early in word recognition, English children applied small-unit decoding only upon rereading, possibly when experiencing difficulties in integrating an unfamiliar word into the sentence context. Rather unexpectedly, cross-linguistic differences were also found in adults in that English adults showed longer processing times than German adults for nonwords. Thus, although orthographic consistency does play a major role in reading development, cross-linguistic differences are detectable even in skilled adult readers. Copyright © 2014 Elsevier Inc. All rights reserved.

  12. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316

  13. The picture superiority effect in a cross-modality recognition task.

    PubMed

    Stenberg, G; Radeborg, K; Hedman, L R

    1995-07-01

    Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  14. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition by using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and the eye movements are recorded. In Experiment 1, phonological information was manipulated at the full-phonological overlap; in Experiment 2, phonological information at the partial-phonological overlap was manipulated; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with the targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full-phonological overlap and partial-phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggested that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  15. A System for Mailpiece ZIP Code Assignment through Contextual Analysis. Phase 2

    DTIC Science & Technology

    1991-03-01

    Index terms: Segmentation, Address Block Interpretation, Automatic Feature Generation, Word Recognition, Feature Detection, Word Verification, Optical Character Recognition, Directory. Report excerpt (1.1 Motivation): The United States Postal Service (USPS) deploys large numbers of optical character recognition (OCR) machines...

  16. Hearing taboo words can result in early talker effects in word recognition for female listeners.

    PubMed

    Tuft, Samantha E; MᶜLennan, Conor T; Krestar, Maura L

    2018-02-01

    Previous spoken word recognition research using the long-term repetition-priming paradigm found performance costs for stimuli mismatching in talker identity. That is, when words were repeated across the two blocks and the identity of the talker changed, reaction times (RTs) were slower than when the repeated words were spoken by the same talker. Such performance costs, or talker effects, followed a time course, occurring only when processing was relatively slow. More recent research suggests that increased explicit and implicit attention towards the talkers can result in talker effects even during relatively fast processing. The purpose of the current study was to examine whether word meaning would influence the pattern of talker effects in an easy lexical decision task and, if so, whether results would differ depending on whether the presentation of neutral and taboo words was mixed or blocked. Regardless of presentation, participants responded to taboo words faster than neutral words. Furthermore, talker effects for the female talker emerged when participants heard both taboo and neutral words (consistent with an attention-based hypothesis), but not for participants who heard only taboo or only neutral words (consistent with the time-course hypothesis). These findings have important implications for theoretical models of spoken word recognition.

  17. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  18. The word-frequency paradox for recall/recognition occurs for pictures.

    PubMed

    Karlsen, Paul Johan; Snodgrass, Joan Gay

    2004-08-01

    A yes-no recognition task and two recall tasks were conducted using pictures of high and low familiarity ratings. Picture familiarity had analogous effects to word frequency, and replicated the word-frequency paradox in recall and recognition. Low-familiarity pictures were more recognizable than high-familiarity pictures, pure lists of high-familiarity pictures were more recallable than pure lists of low-familiarity pictures, and there was no effect of familiarity for mixed lists. These results are consistent with the predictions of the Search of Associative Memory (SAM) model.

  19. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate visual attention's deployment to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention, it also provides important new insights into the methodology employed to investigate the semantic processing of spoken words during spoken word recognition using the printed-word version of the visual-world paradigm.

  20. Evaluation of a voice recognition system for the MOTAS pseudo pilot station function

    NASA Technical Reports Server (NTRS)

    Houck, J. A.

    1982-01-01

    The Langley Research Center has undertaken a technology development activity to provide a capability, the mission oriented terminal area simulation (MOTAS), wherein terminal area and aircraft systems studies can be performed. An experiment was conducted to evaluate state-of-the-art voice recognition technology and specifically, the Threshold 600 voice recognition system to serve as an aircraft control input device for the MOTAS pseudo pilot station function. The results of the experiment using ten subjects showed a recognition error of 3.67 percent for a 48-word vocabulary tested against a programmed vocabulary of 103 words. After the ten subjects retrained the Threshold 600 system for the words which were misrecognized or rejected, the recognition error decreased to 1.96 percent. The rejection rates for both cases were less than 0.70 percent. Based on the results of the experiment, voice recognition technology and specifically the Threshold 600 voice recognition system were chosen to fulfill this MOTAS function.

  1. Effects of distinctive encoding on source-based false recognition: further examination of recall-to-reject processes in aging and Alzheimer disease.

    PubMed

    Pierce, Benton H; Waring, Jill D; Schacter, Daniel L; Budson, Andrew E

    2008-09-01

    To examine the effect of using distinctive materials at encoding on recall-to-reject monitoring processes in aging and Alzheimer disease (AD). AD patients, and to a lesser extent older adults, have shown an impaired ability to use recollection-based monitoring processes (eg, recall-to-reject) to avoid various types of false memories, such as source-based false recognition. Younger adults, healthy older adults, and AD patients engaged in an incidental learning task, in which critical category exemplars were either accompanied by a distinctive picture or were presented as words only. Later, participants studied a series of categorized lists in which several typical exemplars were omitted and were then given a source memory test. Both older and younger adults made more accurate source attributions after picture encoding compared with word-only encoding, whereas AD patients did not exhibit this distinctiveness effect. These results extend those of previous studies showing that monitoring in older adults can be enhanced with distinctive encoding, and suggest that such monitoring processes in AD patients may be insensitive to distinctiveness.

  2. The Contribution of Phonological Awareness to Reading Fluency and Its Individual Sub-skills in Readers Aged 9- to 12-years

    PubMed Central

    Elhassan, Zena; Crewther, Sheila G.; Bavin, Edith L.

    2017-01-01

    Research examining phonological awareness (PA) contributions to reading in established readers of different skill levels is limited. The current study examined the contribution of PA to phonological decoding, visual word recognition, reading rate, and reading comprehension in 124 fourth to sixth grade children (aged 9–12 years). On the basis of scores on the FastaReada measure of reading fluency participants were allocated to one of three reading ability categories: dysfluent (n = 47), moderate (n = 38) and fluent (n = 39). For the dysfluent group, PA contributed significantly to all reading measures except rate, but in the moderate group only to phonological decoding. PA did not influence performances on any of the reading measures examined for the fluent reader group. The results support the notion that fluency is characterized by a shift from conscious decoding to rapid and accurate visual recognition of words. Although PA may be influential in reading development, the results of the current study show that it is not sufficient for fluent reading. PMID:28443048

  3. The Contribution of Phonological Awareness to Reading Fluency and Its Individual Sub-skills in Readers Aged 9- to 12-years.

    PubMed

    Elhassan, Zena; Crewther, Sheila G; Bavin, Edith L

    2017-01-01

    Research examining phonological awareness (PA) contributions to reading in established readers of different skill levels is limited. The current study examined the contribution of PA to phonological decoding, visual word recognition, reading rate, and reading comprehension in 124 fourth to sixth grade children (aged 9-12 years). On the basis of scores on the FastaReada measure of reading fluency participants were allocated to one of three reading ability categories: dysfluent (n = 47), moderate (n = 38) and fluent (n = 39). For the dysfluent group, PA contributed significantly to all reading measures except rate, but in the moderate group only to phonological decoding. PA did not influence performances on any of the reading measures examined for the fluent reader group. The results support the notion that fluency is characterized by a shift from conscious decoding to rapid and accurate visual recognition of words. Although PA may be influential in reading development, the results of the current study show that it is not sufficient for fluent reading.

  4. [When shape-invariant recognition ('A' = 'a') fails. A case study of pure alexia and kinesthetic facilitation].

    PubMed

    Diesfeldt, H F A

    2011-06-01

    A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could detect real letters from made-up ones or from their mirror image. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters in order to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.

  5. Visual recognition of permuted words

    NASA Astrophysics Data System (ADS)

    Rashid, Sheikh Faisal; Shafait, Faisal; Breuel, Thomas M.

    2010-02-01

    In the current study we examine how letter permutation affects the visual recognition of words in two orthographically dissimilar languages, Urdu and German. We present the hypothesis that recognition of permuted and non-permuted words involves two distinct mental processes, and that people use different strategies in handling permuted words compared to normal words. A comparison between the reading behavior of readers of these languages is also presented. We frame our study in the context of dual-route theories of reading and observe that the dual-route account is consistent with our hypothesis of a distinction in the underlying cognitive processing of permuted and non-permuted words. We conducted three experiments using lexical decision tasks to analyze how reading is degraded or affected by letter permutation. We performed analysis of variance (ANOVA), a distribution-free rank test, and a t-test to assess the significance of differences in response-time latencies between the two classes of data. Results showed that recognition accuracy for permuted words decreased by 31% for Urdu and 11% for German. We also found a considerable difference in reading behavior between the cursive and alphabetic languages: reading of Urdu was comparatively slower than reading of German, owing to the characteristics of its cursive script.
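
    The latency comparison described above pairs a parametric t-test with a distribution-free rank test (for two groups, a one-way ANOVA is equivalent to a pooled-variance t-test). A minimal sketch of that pair of tests is shown below; the latency arrays are simulated placeholders, not the study's data.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Simulated lexical-decision latencies in ms (placeholders, not the study's data)
      rt_normal   = rng.normal(650, 80, size=40)    # non-permuted words
      rt_permuted = rng.normal(780, 110, size=40)   # permuted words

      # Parametric comparison (Welch's t-test, not assuming equal variances)
      t_stat, t_p = stats.ttest_ind(rt_normal, rt_permuted, equal_var=False)

      # Distribution-free rank test (Mann-Whitney U), robust to skewed RT distributions
      u_stat, u_p = stats.mannwhitneyu(rt_normal, rt_permuted, alternative="two-sided")

      print(f"t = {t_stat:.2f} (p = {t_p:.4f});  U = {u_stat:.0f} (p = {u_p:.4f})")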

  6. Talker and accent variability effects on spoken word recognition

    NASA Astrophysics Data System (ADS)

    Nyang, Edna E.; Rogers, Catherine L.; Nishi, Kanae

    2003-04-01

    A number of studies have shown that words in a list are recognized less accurately in noise and with longer response latencies when they are spoken by multiple talkers, rather than a single talker. These results have been interpreted as support for an exemplar-based model of speech perception, in which it is assumed that detailed information regarding the speaker's voice is preserved in memory and used in recognition, rather than being eliminated via normalization. In the present study, the effects of varying both accent and talker are investigated using lists of words spoken by (a) a single native English speaker, (b) six native English speakers, (c) three native English speakers and three Japanese-accented English speakers. Twelve /hVd/ words were mixed with multi-speaker babble at three signal-to-noise ratios (+10, +5, and 0 dB) to create the word lists. Native English-speaking listeners' percent-correct recognition for words produced by native English speakers across the three talker conditions (single talker native, multi-talker native, and multi-talker mixed native and non-native) and three signal-to-noise ratios will be compared to determine whether sources of speaker variability other than voice alone add to the processing demands imposed by simple (i.e., single accent) speaker variability in spoken word recognition.

  7. Emotionally enhanced memory for negatively arousing words: storage or retrieval advantage?

    PubMed

    Nadarevic, Lena

    2017-12-01

    People typically remember emotionally negative words better than neutral words. Two experiments are reported that investigate whether emotionally enhanced memory (EEM) for negatively arousing words is based on a storage or retrieval advantage. Participants studied non-word-word pairs that either involved negatively arousing or neutral target words. Memory for these target words was tested by means of a recognition test and a cued-recall test. Data were analysed with a multinomial model that allows the disentanglement of storage and retrieval processes in the present recognition-then-cued-recall paradigm. In both experiments the multinomial analyses revealed no storage differences between negatively arousing and neutral words but a clear retrieval advantage for negatively arousing words in the cued-recall test. These findings suggest that EEM for negatively arousing words is driven by associative processes.

  8. False memory and level of processing effect: an event-related potential study.

    PubMed

    Beato, Maria Soledad; Boldini, Angela; Cadavid, Sara

    2012-09-12

    Event-related potentials (ERPs) were used to determine the effects of level of processing on true and false memory, using the Deese-Roediger-McDermott (DRM) paradigm. In the DRM paradigm, lists of words highly associated to a single nonpresented word (the 'critical lure') are studied and, in a subsequent memory test, critical lures are often falsely remembered. Lists with three critical lures per list were auditorily presented here to participants who studied them with either a shallow (saying whether the word contained the letter 'o') or a deep (creating a mental image of the word) processing task. Visual presentation modality was used on a final recognition test. True recognition of studied words was significantly higher after deep encoding, whereas false recognition of nonpresented critical lures was similar in both experimental groups. At the ERP level, true and false recognition showed similar patterns: no FN400 effect was found, whereas comparable left parietal and late right frontal old/new effects were found for true and false recognition in both experimental conditions. Items studied under shallow encoding conditions elicited more positive ERP than items studied under deep encoding conditions at a 1000-1500 ms interval. These ERP results suggest that true and false recognition share some common underlying processes. Differential effects of level of processing on true and false memory were found only at the behavioral level but not at the ERP level.

  9. Evaluating a Split Processing Model of Visual Word Recognition: Effects of Orthographic Neighborhood Size

    ERIC Educational Resources Information Center

    Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.

    2004-01-01

    The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…

  10. The Picture Superiority Effect in Recognition Memory: A Developmental Study Using the Response Signal Procedure

    ERIC Educational Resources Information Center

    Defeyter, Margaret Anne; Russo, Riccardo; McPartlin, Pamela Louise

    2009-01-01

    Items studied as pictures are better remembered than items studied as words even when test items are presented as words. The present study examined the development of this picture superiority effect in recognition memory. Four groups ranging in age from 7 to 20 years participated. They studied words and pictures, with test stimuli always presented…

  11. Learning-Dependent Changes of Associations between Unfamiliar Words and Perceptual Features: A 15-Day Longitudinal Study

    ERIC Educational Resources Information Center

    Kambara, Toshimune; Tsukiura, Takashi; Shigemune, Yayoi; Kanno, Akitake; Nouchi, Rui; Yomogida, Yukihito; Kawashima, Ryuta

    2013-01-01

    This study examined behavioral changes in 15-day learning of word-picture (WP) and word-sound (WS) associations, using meaningless stimuli. Subjects performed a learning task and two recognition tasks under the WP and WS conditions every day for 15 days. Two main findings emerged from this study. First, behavioral data of recognition accuracy and…

  12. The Processing of Consonants and Vowels during Letter Identity and Letter Position Assignment in Visual-Word Recognition: An ERP Study

    ERIC Educational Resources Information Center

    Vergara-Martinez, Marta; Perea, Manuel; Marin, Alejandro; Carreiras, Manuel

    2011-01-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event related potentials (ERPs) were recorded while participants read words and pseudowords in…

  13. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    ERIC Educational Resources Information Center

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  14. Re-Evaluating Split-Fovea Processing in Word Recognition: A Critical Assessment of Recent Research

    ERIC Educational Resources Information Center

    Jordan, Timothy R.; Paterson, Kevin B.

    2009-01-01

    In recent years, some researchers have proposed that a fundamental component of the word recognition process is that each fovea is divided precisely at its vertical midline and that information either side of this midline projects to different, contralateral hemispheres. Thus, when a word is fixated, all letters to the left of the point of…

  15. Charting the Functional Relevance of Broca's Area for Visual Word Recognition and Picture Naming in Dutch Using fMRI-Guided TMS

    ERIC Educational Resources Information Center

    Wheat, Katherine L.; Cornelissen, Piers L.; Sack, Alexander T.; Schuhmann, Teresa; Goebel, Rainer; Blomert, Leo

    2013-01-01

    Magnetoencephalography (MEG) has shown pseudohomophone priming effects at Broca's area (specifically pars opercularis of left inferior frontal gyrus and precentral gyrus; LIFGpo/PCG) within [approximately]100 ms of viewing a word. This is consistent with Broca's area involvement in fast phonological access during visual word recognition. Here we…

  16. Reading Habits, Perceptual Learning, and Recognition of Printed Words

    ERIC Educational Resources Information Center

    Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram

    2004-01-01

    The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…

  17. Encoding instructions and stimulus presentation in local environmental context-dependent memory studies.

    PubMed

    Markopoulos, G; Rutherford, A; Cairns, C; Green, J

    2010-08-01

    Murnane and Phelps (1993) recommend word pair presentations in local environmental context (EC) studies to prevent associations being formed between successively presented items and their ECs and a consequent reduction in the EC effect. Two experiments were conducted to assess the veracity of this assumption. In Experiment 1, participants memorised single words or word pairs, or categorised them as natural or man-made. Their free recall protocols were examined to assess any associations established between successively presented items. Fewest associations were observed when the item-specific encoding task (i.e., natural or man-made categorisation of word referents) was applied to single words. These findings were examined further in Experiment 2, where the influence of encoding instructions and stimulus presentation on local EC dependent recognition memory was investigated. Consistent with recognition dual-process signal detection model predictions and findings (e.g., Macken, 2002; Parks & Yonelinas, 2008), recollection sensitivity, but not familiarity sensitivity, was found to be local EC dependent. However, local EC dependent recognition was observed only after item-specific encoding instructions, irrespective of stimulus presentation. These findings and the existing literature suggest that the use of single word presentations and item-specific encoding enhances local EC dependent recognition.

  18. Levels-of-processing effect on frontotemporal function in schizophrenia during word encoding and recognition.

    PubMed

    Ragland, J Daniel; Gur, Ruben C; Valdez, Jeffrey N; Loughead, James; Elliott, Mark; Kohler, Christian; Kanes, Stephen; Siegel, Steven J; Moelter, Stephen T; Gur, Raquel E

    2005-10-01

    Patients with schizophrenia improve episodic memory accuracy when given organizational strategies through levels-of-processing paradigms. This study tested if improvement is accompanied by normalized frontotemporal function. Event-related blood-oxygen-level-dependent functional magnetic resonance imaging (fMRI) was used to measure activation during shallow (perceptual) and deep (semantic) word encoding and recognition in 14 patients with schizophrenia and 14 healthy comparison subjects. Despite slower and less accurate overall word classification, the patients showed normal levels-of-processing effects, with faster and more accurate recognition of deeply processed words. These effects were accompanied by left ventrolateral prefrontal activation during encoding in both groups, although the thalamus, hippocampus, and lingual gyrus were overactivated in the patients. During word recognition, the patients showed overactivation in the left frontal pole and had a less robust right prefrontal response. Evidence of normal levels-of-processing effects and left prefrontal activation suggests that patients with schizophrenia can form and maintain semantic representations when they are provided with organizational cues and can improve their word encoding and retrieval. Areas of overactivation suggest residual inefficiencies. Nevertheless, the effect of teaching organizational strategies on episodic memory and brain function is a worthwhile topic for future interventional studies.

  19. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance

    PubMed Central

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013

  20. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance.

    PubMed

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity).

  1. Congruent bodily arousal promotes the constructive recognition of emotional words.

    PubMed

    Kever, Anne; Grynberg, Delphine; Vermeulen, Nicolas

    2017-08-01

    Considerable research has shown that bodily states shape affect and cognition. Here, we examined whether transient states of bodily arousal influence the categorization speed of high arousal, low arousal, and neutral words. Participants completed two blocks of a constructive recognition task, once after a cycling session (increased arousal) and once after a relaxation session (reduced arousal). Results revealed overall faster response times for high arousal compared to low arousal words, and for positive compared to negative words. Importantly, low arousal words were categorized significantly faster after the relaxation session than after the cycling session, suggesting that a decrease in bodily arousal promotes the recognition of stimuli matching one's current arousal state. These findings highlight the importance of the arousal dimension in emotional processing, and suggest the presence of arousal-congruency effects. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. Experience with compound words influences their processing: An eye movement investigation with English compound words.

    PubMed

    Juhasz, Barbara J

    2016-11-14

    Recording eye movements provides information on the time-course of word recognition during reading. Juhasz and Rayner [Juhasz, B. J., & Rayner, K. (2003). Investigating the effects of a set of intercorrelated variables on eye fixation durations in reading. Journal of Experimental Psychology: Learning, Memory and Cognition, 29, 1312-1318] examined the impact of five word recognition variables, including familiarity and age-of-acquisition (AoA), on fixation durations. All variables impacted fixation durations, but the time-course differed. However, the study focused on relatively short, morphologically simple words. Eye movements are also informative for examining the processing of morphologically complex words such as compound words. The present study further examined the time-course of lexical and semantic variables during morphological processing. A total of 120 English compound words that varied in familiarity, AoA, semantic transparency, lexeme meaning dominance, sensory experience rating (SER), and imageability were selected. The impact of these variables on fixation durations was examined when length, word frequency, and lexeme frequencies were controlled in a regression model. The most robust effects were found for familiarity and AoA, indicating that a reader's experience with compound words significantly impacts compound recognition. These results provide insight into semantic processing of morphologically complex words during reading.
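
    The regression approach described in this abstract (predicting fixation durations from familiarity and AoA while statistically controlling length and word frequency) amounts to entering the control variables as additional predictors in an ordinary least-squares fit. A minimal sketch of that idea, with simulated placeholder data and illustrative variable names rather than the study's actual dataset:

        import numpy as np

        rng = np.random.default_rng(4)
        n = 120                                   # one row per compound word
        length = rng.integers(7, 13, size=n).astype(float)
        word_freq = rng.normal(size=n)
        familiarity = rng.normal(size=n)
        aoa = rng.normal(size=n)
        # Simulated fixation durations (ms) with familiarity and AoA effects built in.
        duration = 260 - 8 * familiarity + 10 * aoa + 2 * length + rng.normal(scale=15, size=n)

        # Design matrix: intercept + control variables + predictors of interest.
        X = np.column_stack([np.ones(n), length, word_freq, familiarity, aoa])
        coefs, *_ = np.linalg.lstsq(X, duration, rcond=None)
        print(dict(zip(["intercept", "length", "freq", "familiarity", "AoA"], coefs.round(2))))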

  3. Influences of emotion on context memory while viewing film clips.

    PubMed

    Anderson, Lisa; Shimamura, Arthur P

    2005-01-01

    Participants listened to words while viewing film clips (audio off). Film clips were classified as neutral, positively valenced, negatively valenced, and arousing. Memory was assessed in three ways: recall of film content, recall of words, and context recognition. In the context recognition test, participants were presented a word and determined which film clip was showing when the word was originally presented. In two experiments, context memory performance was disrupted when words were presented during negatively valenced film clips, whereas it was enhanced when words were presented during arousing film clips. Free recall of words presented during the negatively valenced films was also disrupted. These findings suggest multiple influences of emotion on memory performance.

  4. Effects of Bilateral Eye Movements on Gist Based False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Parker, Andrew; Dagnall, Neil

    2007-01-01

    The effects of saccadic bilateral (horizontal) eye movements on gist-based false recognition were investigated. Following exposure to lists of words related to a critical but non-studied word, participants were asked to engage in 30 s of bilateral vs. vertical vs. no eye movements. Subsequent testing of recognition memory revealed that those who…

  5. Conceptually based vocabulary intervention: second graders' development of vocabulary words.

    PubMed

    Dimling, Lisa M

    2010-01-01

    An instructional strategy was investigated that addressed the needs of deaf and hard of hearing students through a conceptually based sign language vocabulary intervention. A single-subject multiple-baseline design was used to determine the effects of the vocabulary intervention on word recognition, production, and comprehension. Six students took part in the 30-minute intervention over 6-8 weeks, learning 12 new vocabulary words each week by means of the three intervention components: (a) word introduction, (b) word activity (semantic mapping), and (c) practice. Results indicated that the vocabulary intervention successfully improved all students' recognition, production, and comprehension of the vocabulary words and phrases.

  6. Speech recognition in one- and two-talker maskers in school-age children and adults: Development of perceptual masking and glimpsing

    PubMed Central

    Buss, Emily; Leibold, Lori J.; Porter, Heather L.; Grose, John H.

    2017-01-01

    Children perform more poorly than adults on a wide range of masked speech perception paradigms, but this effect is particularly pronounced when the masker itself is also composed of speech. The present study evaluated two factors that might contribute to this effect: the ability to perceptually isolate the target from masker speech, and the ability to recognize target speech based on sparse cues (glimpsing). Speech reception thresholds (SRTs) were estimated for closed-set, disyllabic word recognition in children (5–16 years) and adults in a one- or two-talker masker. Speech maskers were 60 dB sound pressure level (SPL), and they were either presented alone or in combination with a 50-dB-SPL speech-shaped noise masker. There was an age effect overall, but performance was adult-like at a younger age for the one-talker than the two-talker masker. Noise tended to elevate SRTs, particularly for older children and adults, and when summed with the one-talker masker. Removing time-frequency epochs associated with a poor target-to-masker ratio markedly improved SRTs, with larger effects for younger listeners; the age effect was not eliminated, however. Results were interpreted as indicating that development of speech-in-speech recognition is likely impacted by development of both perceptual masking and the ability to recognize speech based on sparse cues. PMID:28464682
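
    The glimpsing manipulation described above, removing time-frequency epochs with a poor target-to-masker ratio, is conceptually close to applying an ideal binary mask to the mixture. A minimal sketch of that idea, assuming power spectrograms of the target and masker are separately available; the function name, the 0 dB criterion, and the toy arrays are illustrative assumptions, not the study's procedure.

        import numpy as np

        def ideal_binary_mask(target_spec, masker_spec, criterion_db=0.0):
            """Keep only time-frequency cells whose target-to-masker ratio
            exceeds the criterion; zero out the rest."""
            tmr_db = 10.0 * np.log10(target_spec / (masker_spec + 1e-12) + 1e-12)
            return (tmr_db > criterion_db).astype(float)

        # Toy stand-ins for |STFT|^2 of a target sentence and a two-talker masker.
        rng = np.random.default_rng(0)
        target = rng.random((64, 100))   # 64 frequency bands x 100 time frames
        masker = rng.random((64, 100))
        mask = ideal_binary_mask(target, masker)
        glimpsed_mixture = (target + masker) * mask   # only favourable epochs survive
        print(f"Proportion of time-frequency cells retained: {mask.mean():.2f}")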

  7. Multimodal Alexia: Neuropsychological Mechanisms and Implications for Treatment

    PubMed Central

    Kim, Esther S.; Rapcsak, Steven Z.; Andersen, Sarah; Beeson, Pélagie M.

    2011-01-01

    Letter-by-letter (LBL) reading is the phenomenon whereby individuals with acquired alexia decode words by sequential identification of component letters. In cases where letter recognition or letter naming is impaired, however, a LBL reading approach is obviated, resulting in a nearly complete inability to read, or global alexia. In some such cases, a treatment strategy wherein letter tracing is used to provide tactile and/or kinesthetic input has resulted in improved letter identification. In this study, a kinesthetic treatment approach was implemented with an individual who presented with severe alexia in the context of relatively preserved recognition of orally spelled words, and mildly impaired oral/written spelling. Eight weeks of kinesthetic treatment resulted in improved letter identification accuracy and oral reading of trained words; however, the participant remained unable to successfully decode untrained words. Further testing revealed that, in addition to the visual-verbal disconnection that resulted in impaired word reading and letter naming, her limited ability to derive benefit from the kinesthetic strategy was attributable to a disconnection that prevented access to letter names from kinesthetic input. We propose that this kinesthetic-verbal disconnection resulted from damage to the left parietal lobe and underlying white matter, a neuroanatomical feature that is not typically observed in patients with global alexia or classic LBL reading. This unfortunate combination of visual-verbal and kinesthetic-verbal disconnections demonstrated in this individual resulted in a persistent multimodal alexia syndrome that was resistant to behavioral treatment. To our knowledge, this is the first case in which the nature of this form of multimodal alexia has been fully characterized, and our findings provide guidance regarding the requisite cognitive skills and lesion profiles that are likely to be associated with a positive response to tactile/kinesthetic treatment. PMID:21952194

  8. Multimodal alexia: neuropsychological mechanisms and implications for treatment.

    PubMed

    Kim, Esther S; Rapcsak, Steven Z; Andersen, Sarah; Beeson, Pélagie M

    2011-11-01

    Letter-by-letter (LBL) reading is the phenomenon whereby individuals with acquired alexia decode words by sequential identification of component letters. In cases where letter recognition or letter naming is impaired, however, a LBL reading approach is obviated, resulting in a nearly complete inability to read, or global alexia. In some such cases, a treatment strategy wherein letter tracing is used to provide tactile and/or kinesthetic input has resulted in improved letter identification. In this study, a kinesthetic treatment approach was implemented with an individual who presented with severe alexia in the context of relatively preserved recognition of orally spelled words, and mildly impaired oral/written spelling. Eight weeks of kinesthetic treatment resulted in improved letter identification accuracy and oral reading of trained words; however, the participant remained unable to successfully decode untrained words. Further testing revealed that, in addition to the visual-verbal disconnection that resulted in impaired word reading and letter naming, her limited ability to derive benefit from the kinesthetic strategy was attributable to a disconnection that prevented access to letter names from kinesthetic input. We propose that this kinesthetic-verbal disconnection resulted from damage to the left parietal lobe and underlying white matter, a neuroanatomical feature that is not typically observed in patients with global alexia or classic LBL reading. This unfortunate combination of visual-verbal and kinesthetic-verbal disconnections demonstrated in this individual resulted in a persistent multimodal alexia syndrome that was resistant to behavioral treatment. To our knowledge, this is the first case in which the nature of this form of multimodal alexia has been fully characterized, and our findings provide guidance regarding the requisite cognitive skills and lesion profiles that are likely to be associated with a positive response to tactile/kinesthetic treatment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  9. Usage of semantic representations in recognition memory.

    PubMed

    Nishiyama, Ryoji; Hirano, Tetsuji; Ukita, Jun

    2017-11-01

    Meanings of words facilitate false acceptance as well as correct rejection of lures in recognition memory tests, depending on the experimental context. This suggests that semantic representations are used in remembering both directly and indirectly (i.e., mediated by perceptual representations). Studies using memory conjunction error (MCE) paradigms, in which the lures consist of component parts of studied words, have reported semantic facilitation of the rejection of lures. However, attention to the components of the lures could potentially account for this. We therefore investigated whether semantic overlap with lures facilitates MCEs, using Japanese Kanji words, for which reading relies more heavily on a whole-word image. The experiments demonstrated semantic facilitation of MCEs in a delayed recognition test (Experiment 1) and in immediate recognition tests in which participants were prevented from using phonological or orthographic representations (Experiment 2), with a particularly salient effect for individuals with high semantic memory capacity (Experiment 3). Additionally, analysis of the receiver operating characteristic suggested that this effect is attributable to familiarity-based memory judgement and phantom recollection. These findings indicate that semantic representations can be used directly in remembering, even when perceptual representations of studied words are available.

  10. The influence of speech rate and accent on access and use of semantic information.

    PubMed

    Sajin, Stanislav M; Connine, Cynthia M

    2017-04-01

    Circumstances in which the speech input is presented in sub-optimal conditions generally lead to processing costs affecting spoken word recognition. The current study indicates that some processing demands imposed by listening to difficult speech can be mitigated by feedback from semantic knowledge. A set of lexical decision experiments examined how foreign accented speech and word duration impact access to semantic knowledge in spoken word recognition. Results indicate that when listeners process accented speech, the reliance on semantic information increases. Speech rate was not observed to influence semantic access, except in the setting in which unusually slow accented speech was presented. These findings support interactive activation models of spoken word recognition in which attention is modulated based on speech demands.

  11. Emotion words and categories: evidence from lexical decision.

    PubMed

    Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C

    2014-05-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.

  12. Associations of hallucination proneness with free-recall intrusions and response bias in a nonclinical sample.

    PubMed

    Brébion, Gildas; Larøi, Frank; Van der Linden, Martial

    2010-10-01

    Hallucinations in patients with schizophrenia have been associated with a liberal response bias in signal detection and recognition tasks and with various types of source-memory error. We investigated the associations of hallucination proneness with free-recall intrusions and false recognitions of words in a nonclinical sample. A total of 81 healthy individuals were administered a verbal memory task involving free recall and recognition of one nonorganizable and one semantically organizable list of words. Hallucination proneness was assessed by means of a self-rating scale. Global hallucination proneness was associated with free-recall intrusions in the nonorganizable list and with a response bias reflecting a tendency to make false recognitions of nontarget words in both types of list. The verbal hallucination score was associated with more intrusions and with a reduced tendency to make false recognitions of words. The associations between global hallucination proneness and two types of verbal memory error in a nonclinical sample corroborate those observed in patients with schizophrenia and suggest that common cognitive mechanisms underlie hallucinations in psychiatric and nonclinical individuals.
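
    The liberal response bias referred to here is conventionally quantified with signal detection measures derived from hit and false-alarm rates. A minimal sketch of that computation; the example rates are invented for illustration and are not taken from the study.

        from statistics import NormalDist

        def sdt_measures(hit_rate, fa_rate):
            """Return sensitivity (d') and criterion (c); negative c indicates a liberal bias."""
            z = NormalDist().inv_cdf
            d_prime = z(hit_rate) - z(fa_rate)
            criterion = -0.5 * (z(hit_rate) + z(fa_rate))
            return d_prime, criterion

        # Hypothetical participant who makes many false recognitions of nontarget words.
        d, c = sdt_measures(hit_rate=0.80, fa_rate=0.35)
        print(f"d' = {d:.2f}, c = {c:.2f}  (c < 0: liberal responding)")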

  13. Improving language models for radiology speech recognition.

    PubMed

    Paulett, John M; Langlotz, Curtis P

    2009-02-01

    Speech recognition systems have become increasingly popular as a means to produce radiology reports, for reasons both of efficiency and of cost. However, the suboptimal recognition accuracy of these systems can affect the productivity of the radiologists creating the text reports. We analyzed a database of over two million de-identified radiology reports to identify the strongest determinants of word frequency. Our results showed that body site and imaging modality had an influence on the frequency of words and of three-word phrases similar to that of the identity of the speaker. These findings suggest that the accuracy of speech recognition systems could be significantly enhanced by further tailoring their language models to body site and imaging modality, which are readily available at the time of report creation.
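
    Tailoring a language model to body site and imaging modality, as suggested above, amounts in its simplest form to estimating word and short-phrase frequencies separately for each stratum of reports. A minimal sketch of such stratified counts; the toy reports, field names, and phrases are invented, not drawn from the analyzed database.

        from collections import Counter, defaultdict

        # Toy stand-ins for de-identified report texts tagged by modality and body site.
        reports = [
            {"modality": "CT", "body_site": "chest", "text": "no acute pulmonary embolism identified"},
            {"modality": "MR", "body_site": "brain", "text": "no acute intracranial abnormality identified"},
        ]

        unigrams = defaultdict(Counter)   # (modality, body_site) -> word counts
        trigrams = defaultdict(Counter)   # (modality, body_site) -> three-word phrase counts

        for r in reports:
            key = (r["modality"], r["body_site"])
            words = r["text"].split()
            unigrams[key].update(words)
            trigrams[key].update(zip(words, words[1:], words[2:]))

        print(unigrams[("CT", "chest")].most_common(3))
        print(trigrams[("MR", "brain")].most_common(1))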

  14. Pictures, images, and recollective experience.

    PubMed

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  15. The Role of Semantics in Translation Recognition: Effects of Number of Translations, Dominance of Translations and Semantic Relatedness of Multiple Translations

    ERIC Educational Resources Information Center

    Laxen, Jannika; Lavaur, Jean-Marc

    2010-01-01

    This study aims to examine the influence of multiple translations of a word on bilingual processing in three translation recognition experiments during which French-English bilinguals had to decide whether two words were translations of each other or not. In the first experiment, words with only one translation were recognized as translations…

  16. Handwritten Word Recognition Using Multi-view Analysis

    NASA Astrophysics Data System (ADS)

    de Oliveira, J. J.; de A. Freitas, C. O.; de Carvalho, J. M.; Sabourin, R.

    This paper brings a contribution to the problem of efficiently recognizing handwritten words from a limited size lexicon. For that, a multiple classifier system has been developed that analyzes the words from three different approximation levels, in order to get a computational approach inspired on the human reading process. For each approximation level a three-module architecture composed of a zoning mechanism (pseudo-segmenter), a feature extractor and a classifier is defined. The proposed application is the recognition of the Portuguese handwritten names of the months, for which a best recognition rate of 97.7% was obtained, using classifier combination.

  17. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    PubMed

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.

  18. Faces are special but not too special: Spared face recognition in amnesia is based on familiarity

    PubMed Central

    Aly, Mariam; Knight, Robert T.; Yonelinas, Andrew P.

    2014-01-01

    Most current theories of human memory are material-general in the sense that they assume that the medial temporal lobe (MTL) is important for retrieving the details of prior events, regardless of the specific type of materials. Recent studies of amnesia have challenged the material-general assumption by suggesting that the MTL may be necessary for remembering words, but is not involved in remembering faces. We examined recognition memory for faces and words in a group of amnesic patients, which included hypoxic patients and patients with extensive left or right MTL lesions. Recognition confidence judgments were used to plot receiver operating characteristics (ROCs) in order to more fully quantify recognition performance and to estimate the contributions of recollection and familiarity. Consistent with the extant literature, an analysis of overall recognition accuracy showed that the patients were impaired at word memory but had spared face memory. However, the ROC analysis indicated that the patients were generally impaired at high confidence recognition responses for faces and words, and they exhibited significant recollection impairments for both types of materials. Familiarity for faces was preserved in all patients, but extensive left MTL damage impaired familiarity for words. These results suggest that face recognition may appear to be spared because performance tends to rely heavily on familiarity, a process that is relatively well preserved in amnesia. The findings challenge material-general theories of memory, and suggest that both material and process are important determinants of memory performance in amnesia, and different types of materials may depend more or less on recollection and familiarity. PMID:20833190
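
    The ROC analysis mentioned in this abstract is built by cumulating response proportions across confidence levels, from the most confident "old" response to the most confident "new" response, separately for studied and unstudied items. A minimal sketch of that construction; the rating counts below are invented for illustration.

        import numpy as np

        def confidence_roc(old_counts, new_counts):
            """Cumulative hit and false-alarm rates from confidence-rating counts,
            ordered from most confident 'old' to most confident 'new'."""
            hits = np.cumsum(old_counts) / np.sum(old_counts)
            fas = np.cumsum(new_counts) / np.sum(new_counts)
            return fas, hits

        # Hypothetical 6-point ratings (6 = sure old ... 1 = sure new).
        old_counts = [40, 25, 15, 10, 6, 4]   # responses to studied items
        new_counts = [5, 10, 15, 20, 25, 25]  # responses to unstudied items
        fas, hits = confidence_roc(old_counts, new_counts)
        for f, h in zip(fas, hits):
            print(f"FA = {f:.2f}  Hit = {h:.2f}")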

  19. Voice gender and the segregation of competing talkers: Perceptual learning in cochlear implant simulations

    PubMed Central

    Sullivan, Jessica R.; Assmann, Peter F.; Hossain, Shaikat; Schafer, Erin C.

    2017-01-01

    Two experiments explored the role of differences in voice gender in the recognition of speech masked by a competing talker in cochlear implant simulations. Experiment 1 confirmed that listeners with normal hearing receive little benefit from differences in voice gender between a target and masker sentence in four- and eight-channel simulations, consistent with previous findings that cochlear implants deliver an impoverished representation of the cues for voice gender. However, gender differences led to small but significant improvements in word recognition with 16 and 32 channels. Experiment 2 assessed the benefits of perceptual training on the use of voice gender cues in an eight-channel simulation. Listeners were assigned to one of four groups: (1) word recognition training with target and masker differing in gender; (2) word recognition training with same-gender target and masker; (3) gender recognition training; or (4) control with no training. Significant improvements in word recognition were observed from pre- to post-test sessions for all three training groups compared to the control group. These improvements were maintained at the late session (one week following the last training session) for all three groups. There was an overall improvement in masked word recognition performance provided by gender mismatch following training, but the amount of benefit did not differ as a function of the type of training. The training effects observed here are consistent with a form of rapid perceptual learning that contributes to the segregation of competing voices but does not specifically enhance the benefits provided by voice gender cues. PMID:28372046

  20. Holistic word processing in dyslexia

    PubMed Central

    Conway, Aisling; Misra, Karuna

    2017-01-01

    People with dyslexia have difficulty learning to read and many lack fluent word recognition as adults. In a novel task that borrows elements of the ‘word superiority’ and ‘word inversion’ paradigms, we investigate whether holistic word recognition is impaired in dyslexia. In Experiment 1 students with dyslexia and controls judged the similarity of pairs of 6- and 7-letter words or pairs of words whose letters had been partially jumbled. The stimuli were presented in both upright and inverted form with orthographic regularity and orientation randomized from trial to trial. While both groups showed sensitivity to orthographic regularity, both word inversion and letter jumbling were more detrimental to skilled than dyslexic readers supporting the idea that the latter may read in a more analytic fashion. Experiment 2 employed the same task but using shorter, 4- and 5-letter words and a design where orthographic regularity and stimuli orientation was held constant within experimental blocks to encourage the use of either holistic or analytic processing. While there was no difference in reaction time between the dyslexic and control groups for inverted stimuli, the students with dyslexia were significantly slower than controls for upright stimuli. These findings suggest that holistic word recognition, which is largely based on the detection of orthographic regularity, is impaired in dyslexia. PMID:29121046

  1. Influences of spoken word planning on speech recognition.

    PubMed

    Roelofs, Ardi; Ozdemir, Rebecca; Levelt, Willem J M

    2007-09-01

    In 4 chronometric experiments, influences of spoken word planning on speech recognition were examined. Participants were shown pictures while hearing a tone or a spoken word presented shortly after picture onset. When a spoken word was presented, participants indicated whether it contained a prespecified phoneme. When the tone was presented, they indicated whether the picture name contained the phoneme (Experiment 1) or they named the picture (Experiment 2). Phoneme monitoring latencies for the spoken words were shorter when the picture name contained the prespecified phoneme compared with when it did not. Priming of phoneme monitoring was also obtained when the phoneme was part of spoken nonwords (Experiment 3). However, no priming of phoneme monitoring was obtained when the pictures required no response in the experiment, regardless of monitoring latency (Experiment 4). These results provide evidence that an internal phonological pathway runs from spoken word planning to speech recognition and that active phonological encoding is a precondition for engaging the pathway. 2007 APA

  2. A cascaded neuro-computational model for spoken word recognition

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2010-03-01

    In human speech recognition, words are analysed at both pre-lexical (i.e., sub-word) and lexical (word) levels. The aim of this paper is to propose a constructive neuro-computational model that incorporates both these levels as cascaded layers of pre-lexical and lexical units. The layered structure enables the system to handle the variability of real speech input. Within the model, receptive fields of the pre-lexical layer consist of radial basis functions; the lexical layer is composed of units that perform pattern matching between their internal template and a series of labels, corresponding to the winning receptive fields in the pre-lexical layer. The model adapts through self-tuning of all units, in combination with the formation of a connectivity structure through unsupervised (first layer) and supervised (higher layers) network growth. Simulation studies show that the model can achieve a level of performance in spoken word recognition similar to that of a benchmark approach using hidden Markov models, while enabling parallel access to word candidates in lexical decision making.
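
    A minimal sketch of the Gaussian radial-basis-function receptive fields that the pre-lexical layer is described as using, with the winning unit's label passed up to the lexical layer. The centres, width, feature dimensionality, and input frame are invented; this illustrates the general mechanism, not the authors' implementation.

        import numpy as np

        def rbf_activations(x, centres, width=1.0):
            """Gaussian radial basis activations of pre-lexical units for one input frame."""
            d2 = np.sum((centres - x) ** 2, axis=1)          # squared distance to each centre
            return np.exp(-d2 / (2.0 * width ** 2))

        rng = np.random.default_rng(1)
        centres = rng.normal(size=(8, 12))   # 8 pre-lexical units, 12-dim acoustic features
        frame = rng.normal(size=12)          # one input feature frame
        act = rbf_activations(frame, centres)
        winner = int(np.argmax(act))         # label handed to the lexical pattern matchers
        print(f"Winning receptive field: unit {winner}")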

  3. When Is All Understood and Done? The Psychological Reality of the Recognition Point

    ERIC Educational Resources Information Center

    Bolte, Jens; Uhe, Mechtild

    2004-01-01

    Using lexical decision, the effects of primes of different length on spoken word recognition were evaluated in three partial repetition priming experiments. Prime length was determined via gating (Experiments 1a and 2a). It was shorter than, equivalent to, or longer than the recognition point (RP), or a complete word. In Experiments 1b and 1c,…

  4. Effect of signal to noise ratio on the speech perception ability of older adults

    PubMed Central

    Shojaei, Elahe; Ashayeri, Hassan; Jafari, Zahra; Zarrin Dast, Mohammad Reza; Kamali, Koorosh

    2016-01-01

    Background: Speech perception ability depends on auditory and extra-auditory elements. The signal-to-noise ratio (SNR) is an extra-auditory element that affects the ability to follow speech normally and maintain a conversation. Difficulty perceiving speech in noise is a common complaint of the elderly. In this study, the importance of SNR magnitude as an extra-auditory effect on speech perception in noise was examined in the elderly. Methods: The speech perception in noise (SPIN) test was conducted on 25 elderly participants who had bilateral low–mid frequency normal hearing thresholds, at three SNRs in the presence of ipsilateral white noise. Participants were selected by convenience (available) sampling. Cognitive screening was done using the Persian Mini Mental State Examination (MMSE) test. Results: Independent t-tests, ANOVA, and Pearson correlation were used for statistical analysis. There was a significant difference in word discrimination scores in silence and at the three SNRs in both ears (p≤0.047). Moreover, there was a significant difference in word discrimination scores for paired SNRs (0 and +5, 0 and +10, and +5 and +10; p≤0.04). No significant correlation was found between age and word recognition scores in silence and at the three SNRs in both ears (p≥0.386). Conclusion: Our results revealed that decreasing the signal level and increasing the competing noise considerably reduced speech perception ability in elderly listeners with normal low–mid frequency hearing thresholds. These results support the critical role of SNR for speech perception ability in the elderly. Furthermore, our results revealed that normal hearing elderly participants required compensatory strategies to maintain normal speech perception in challenging acoustic situations. PMID:27390712
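
    The SNR manipulation at the heart of this study comes down to scaling the competing noise so that the speech-to-noise power ratio equals a target value in dB. A minimal sketch of that mixing step; the signals are synthetic placeholders and the helper name is ours, not part of the SPIN materials.

        import numpy as np

        def mix_at_snr(speech, noise, snr_db):
            """Scale the noise so that 10*log10(P_speech / P_noise) equals snr_db, then mix."""
            p_speech = np.mean(speech ** 2)
            p_noise = np.mean(noise ** 2)
            gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
            return speech + gain * noise

        rng = np.random.default_rng(2)
        speech = rng.normal(size=16000)      # placeholder for a 1-s speech token
        noise = rng.normal(size=16000)       # placeholder for ipsilateral white noise
        for snr in (0, 5, 10):               # the three SNRs used in the study
            mixture = mix_at_snr(speech, noise, snr)
            achieved = 10 * np.log10(np.mean(speech ** 2) / np.mean((mixture - speech) ** 2))
            print(f"target {snr:+d} dB, achieved {achieved:+.1f} dB")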

  5. Word length and lexical activation: longer is better.

    PubMed

    Pitt, Mark A; Samuel, Arthur G

    2006-10-01

    Many models of spoken word recognition posit the existence of lexical and sublexical representations, with excitatory and inhibitory mechanisms used to affect the activation levels of such representations. Bottom-up evidence provides excitatory input, and inhibition from phonetically similar representations leads to lexical competition. In such a system, long words should produce stronger lexical activation than short words, for 2 reasons: Long words provide more bottom-up evidence than short words, and short words are subject to greater inhibition due to the existence of more similar words. Four experiments provide evidence for this view. In addition, reaction-time-based partitioning of the data shows that long words generate greater activation that is available both earlier and for a longer time than is the case for short words. As a result, lexical influences on phoneme identification are extremely robust for long words but are quite fragile and condition-dependent for short words. Models of word recognition must consider words of all lengths to capture the true dynamics of lexical activation. Copyright 2006 APA.

  6. Lexical association and false memory for words in two cultures.

    PubMed

    Lee, Yuh-shiow; Chiang, Wen-Chi; Hung, Hsu-Ching

    2008-01-01

    This study examined the relationship between language experience and false memory produced by the DRM paradigm. The word lists used in Stadler, et al. (Memory & Cognition, 27, 494-500, 1999) were first translated into Chinese. False recall and false recognition for critical non-presented targets were then tested in a group of Chinese speakers. The average co-occurrence rate of each list word and the critical word was calculated based on two large Chinese corpora. List-level analyses revealed that the correlation between the American and Taiwanese participants was significant only in false recognition. More importantly, the co-occurrence rate was significantly correlated with false recall and recognition of Taiwanese participants, and not of American participants. In addition, the backward association strength based on Nelson et al. (The University of South Florida word association, rhyme and word fragment norms, 1999) was significantly correlated with false recall of American participants and not of Taiwanese participants. Results are discussed in terms of the relationship between language experiences and lexical association in creating false memory for word lists.
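
    A co-occurrence rate of the kind described here can be operationalized as the proportion of corpus contexts containing a list word that also contain the critical word. A minimal sketch under that simple document-level definition; the exact measure used in the study may differ, and the toy corpus and words are invented.

        def cooccurrence_rate(corpus_docs, list_word, critical_word):
            """Proportion of documents containing the list word that also contain
            the critical (non-presented) word."""
            with_list = [doc for doc in corpus_docs if list_word in doc]
            if not with_list:
                return 0.0
            both = sum(1 for doc in with_list if critical_word in doc)
            return both / len(with_list)

        # Toy corpus: each document is represented as a set of word tokens.
        corpus = [
            {"bed", "rest", "dream", "sleep"},
            {"bed", "pillow", "night"},
            {"tired", "awake", "sleep"},
        ]
        print(cooccurrence_rate(corpus, "bed", "sleep"))   # 0.5 in this toy corpus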

  7. The influence of talker and foreign-accent variability on spoken word identification.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  8. The coupling of emotion and cognition in the eye: introducing the pupil old/new effect.

    PubMed

    Võ, Melissa L-H; Jacobs, Arthur M; Kuchinke, Lars; Hofmann, Markus; Conrad, Markus; Schacht, Annekathrin; Hutzler, Florian

    2008-01-01

    The study presented here investigated the effects of emotional valence on the memory for words by assessing both memory performance and pupillary responses during a recognition memory task. Participants had to make speeded judgments on whether a word presented in the test phase of the experiment had already been presented ("old") or not ("new"). An emotion-induced recognition bias was observed: Words with emotional content not only produced a higher number of hits, but also elicited more false alarms than neutral words. Further, we found a distinct pupil old/new effect characterized as an elevated pupillary response to hits as opposed to correct rejections. Interestingly, this pupil old/new effect was clearly diminished for emotional words. We therefore argue that the pupil old/new effect is not only able to mirror memory retrieval processes, but also reflects modulation by an emotion-induced recognition bias.

  9. The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.

    PubMed

    Norris, Dennis

    2006-04-01

    This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
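
    The core computation attributed to the Bayesian reader is a posterior over lexical candidates that combines noisy perceptual evidence with priors derived from word frequency. A minimal sketch under heavily simplified assumptions (a three-word lexicon, precomputed log-likelihoods, invented frequencies); it illustrates the decision rule, not the published model.

        import numpy as np

        def word_posterior(log_likelihoods, frequencies):
            """Posterior over words: perceptual evidence weighted by frequency-based priors."""
            priors = np.asarray(frequencies, dtype=float)
            priors /= priors.sum()
            unnorm = np.exp(log_likelihoods) * priors
            return unnorm / unnorm.sum()

        lexicon = ["cat", "cap", "cot"]
        freqs = [120, 15, 30]                   # occurrences per million (invented)
        log_lik = np.array([-1.0, -1.2, -2.5])  # evidence from a noisy percept (invented)
        post = word_posterior(log_lik, freqs)
        for w, p in zip(lexicon, post):
            print(f"P({w} | input) = {p:.2f}")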

  10. Memory bias for negative emotional words in recognition memory is driven by effects of category membership

    PubMed Central

    White, Corey N.; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M.; Ratcliff, Roger

    2014-01-01

    Recognition memory studies often find that emotional items are more likely than neutral items to be labeled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium, or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorized words were presented in the lists. Similar, though weaker, effects were observed with categorized words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership. PMID:24303902

  11. Memory bias for negative emotional words in recognition memory is driven by effects of category membership.

    PubMed

    White, Corey N; Kapucu, Aycan; Bruno, Davide; Rotello, Caren M; Ratcliff, Roger

    2014-01-01

    Recognition memory studies often find that emotional items are more likely than neutral items to be labelled as studied. Previous work suggests this bias is driven by increased memory strength/familiarity for emotional items. We explored strength and bias interpretations of this effect with the conjecture that emotional stimuli might seem more familiar because they share features with studied items from the same category. Categorical effects were manipulated in a recognition task by presenting lists with a small, medium or large proportion of emotional words. The liberal memory bias for emotional words was only observed when a medium or large proportion of categorised words were presented in the lists. Similar, though weaker, effects were observed with categorised words that were not emotional (animal names). These results suggest that liberal memory bias for emotional items may be largely driven by effects of category membership.

  12. The time course of spoken word learning and recognition: studies with artificial lexicons.

    PubMed

    Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine

    2003-06-01

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

  13. The effect of normative context variability on recognition memory.

    PubMed

    Steyvers, Mark; Malmberg, Kenneth J

    2003-09-01

    According to some theories of recognition memory (e.g., S. Dennis & M. S. Humphreys, 2001), the number of different contexts in which words appear determines how memorable individual occurrences of words will be: A word that occurs in a small number of different contexts should be better recognized than a word that appears in a larger number of different contexts. To empirically test this prediction, a normative measure is developed, referred to here as context variability, that estimates the number of different contexts in which words appear in everyday life. These findings confirm the prediction that words low in context variability are better recognized (on average) than words that are high in context variability. (c) 2003 APA, all rights reserved
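
    Context variability, as used in this abstract, is a corpus count of how many distinct contexts a word occurs in. A minimal sketch under the simple assumption that a "context" is a document; the published norms may define contexts differently, and the toy documents are invented.

        from collections import defaultdict

        def context_variability(documents):
            """Map each word to the number of distinct documents it appears in."""
            contexts = defaultdict(set)
            for doc_id, text in enumerate(documents):
                for word in set(text.lower().split()):
                    contexts[word].add(doc_id)
            return {word: len(ids) for word, ids in contexts.items()}

        docs = [
            "the quiet harbor at dawn",
            "a quiet reading room",
            "the harbor lights at night",
        ]
        cv = context_variability(docs)
        print(cv["quiet"], cv["harbor"], cv["dawn"])   # 2 2 1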

  14. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    PubMed

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  15. Famous talker effects in spoken word recognition.

    PubMed

    Maibauer, Alisa M; Markis, Teresa A; Newell, Jessica; McLennan, Conor T

    2014-01-01

    Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.

  16. Tracking the emergence of the consonant bias in visual-word recognition: evidence with developing readers.

    PubMed

    Soares, Ana Paula; Perea, Manuel; Comesaña, Montserrat

    2014-01-01

    Recent research with skilled adult readers has consistently revealed an advantage of consonants over vowels in visual-word recognition (i.e., the so-called "consonant bias"). Nevertheless, little is known about how early in development the consonant bias emerges. This work aims to address this issue by studying the relative contribution of consonants and vowels at the early stages of visual-word recognition in developing readers (2nd and 4th Grade children) and skilled adult readers (college students) using a masked priming lexical decision task. Target words starting either with a consonant or a vowel were preceded by a briefly presented masked prime (50 ms) that could be the same as the target (e.g., pirata-PIRATA [pirate-PIRATE]), a consonant-preserving prime (e.g., pureto-PIRATA), a vowel-preserving prime (e.g., gicala-PIRATA), or an unrelated prime (e.g., bocelo-PIRATA). Results revealed significant priming effects for the identity and consonant-preserving conditions in adult readers and 4th Grade children, whereas 2nd graders only showed priming for the identity condition. In adult readers, the advantage of consonants was observed both for words starting with a consonant or a vowel, while in 4th graders this advantage was restricted to words with an initial consonant. Thus, the present findings suggest that a Consonant/Vowel skeleton should be included in future (developmental) models of visual-word recognition and reading.

  17. A Bridge between Pictures and Print.

    ERIC Educational Resources Information Center

    Jeffree, Dorothy

    1981-01-01

    The experiment investigated the feasibility of bridging the gap between the recognition of pictures and the recognition of words in four mentally handicapped adolescents by adapting a modified version of symbol accentuation (in which a printed word looks like the object it represents). (SB)

  18. Measuring Reading Performance Informally.

    ERIC Educational Resources Information Center

    Powell, William R.

    To improve the accuracy of the informal reading inventory (IRI), a differential set of criteria is necessary for both word recognition and comprehension scores for different levels and reading conditions. In initial evaluation, word recognition scores should reflect only errors of insertions, omissions, mispronunciations, substitutions, unknown…

  19. Microcomputers and Preschoolers.

    ERIC Educational Resources Information Center

    Evans, Dina

    Preschool children can benefit by working with microcomputers. Thinking skills are enhanced by software games that focus on logic, memory, problem solving, and pattern recognition. Counting, sequencing, and matching games develop mathematics skills, and word games focusing on basic letter symbol and word recognition develop language skills.…

  20. Speech as a pilot input medium

    NASA Technical Reports Server (NTRS)

    Plummer, R. P.; Coler, C. R.

    1977-01-01

    The speech recognition system under development is a trainable pattern classifier based on a maximum-likelihood technique. An adjustable uncertainty threshold allows the rejection of borderline cases for which the probability of misclassification is high. The syntax of the spoken command language may be used as an aid to recognition, and the system adapts to changes in pronunciation if feedback from the user is available. Words must be separated by 0.25-second gaps. The system runs in real time on a minicomputer (PDP 11/10) and was tested on 120,000 speech samples from 10- and 100-word vocabularies. The results of these tests were 99.9% correct recognition for a vocabulary consisting of the ten digits, and 99.6% recognition for a 100-word vocabulary of flight commands, with a 5% rejection rate in each case. With no rejection, the recognition accuracies for the same vocabularies were 99.5% and 98.6%, respectively.
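
    A minimal sketch of a maximum-likelihood word classifier with an adjustable rejection threshold for borderline utterances, the general technique named in this abstract. The Gaussian template models, feature dimensionality, margin, and all numbers are invented; this is an illustration of the idea, not the NASA system.

        import numpy as np

        def classify_with_rejection(x, means, variances, reject_margin=2.0):
            """Pick the word template with the highest Gaussian log-likelihood;
            reject if the best and second-best candidates are too close."""
            log_liks = []
            for mu, var in zip(means, variances):
                ll = -0.5 * np.sum((x - mu) ** 2 / var + np.log(2 * np.pi * var))
                log_liks.append(ll)
            order = np.argsort(log_liks)[::-1]
            best, second = order[0], order[1]
            if log_liks[best] - log_liks[second] < reject_margin:
                return None                      # too uncertain: reject and ask for a repeat
            return int(best)

        rng = np.random.default_rng(3)
        means = rng.normal(size=(10, 6))         # 10-word vocabulary, 6-dim feature templates
        variances = np.ones((10, 6))
        sample = means[4] + 0.1 * rng.normal(size=6)
        print(classify_with_rejection(sample, means, variances))   # expected: 4 (None if borderline)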
