Sample records for visual word learning

  1. Embodied attention and word learning by toddlers

    PubMed Central

    Yu, Chen; Smith, Linda B.

    2013-01-01

    Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but they did not learn it when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116

  2. Interfering Neighbours: The Impact of Novel Word Learning on the Identification of Visually Similar Words

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Davis, Colin J.; Hanley, Derek A.

    2005-01-01

    We assessed the impact of visual similarity on written word identification by having participants learn new words (e.g. BANARA) that were neighbours of familiar words that previously had no neighbours (e.g. BANANA). Repeated exposure to these new words made it more difficult to semantically categorize the familiar words. There was some evidence of…

  3. Word learning and the cerebral hemispheres: from serial to parallel processing of written words

    PubMed Central

    Ellis, Andrew W.; Ferreira, Roberto; Cathles-Hagan, Polly; Holt, Kathryn; Jarvis, Lisa; Barca, Laura

    2009-01-01

    Reading familiar words differs from reading unfamiliar non-words in two ways. First, word reading is faster and more accurate than reading of unfamiliar non-words. Second, effects of letter length are reduced for words, particularly when they are presented in the right visual field in familiar formats. Two experiments are reported in which right-handed participants read aloud non-words presented briefly in their left and right visual fields before and after training on those items. The non-words were interleaved with familiar words in the naming tests. Before training, naming was slow and error prone, with marked effects of length in both visual fields. After training, fewer errors were made, naming was faster, and the effect of length was much reduced in the right visual field compared with the left. We propose that word learning creates orthographic word forms in the mid-fusiform gyrus of the left cerebral hemisphere. Those word forms allow words to access their phonological and semantic representations on a lexical basis. But orthographic word forms also interact with more posterior letter recognition systems in the middle/inferior occipital gyri, inducing more parallel processing of right visual field words than is possible for any left visual field stimulus, or for unfamiliar non-words presented in the right visual field. PMID:19933140

  4. Effects of Multimodal Information on Learning Performance and Judgment of Learning

    ERIC Educational Resources Information Center

    Chen, Gongxiang; Fu, Xiaolan

    2003-01-01

    Two experiments were conducted to investigate the effects of multimodal information on learning performance and judgment of learning (JOL). Experiment 1 examined the effects of representation type (word-only versus word-plus-picture) and presentation channel (visual-only versus visual-plus-auditory) on recall and immediate-JOL in fixed-rate…

  5. Learning of grammar-like visual sequences by adults with and without language-learning disabilities.

    PubMed

    Aguilar, Jessica M; Plante, Elena

    2014-08-01

    Two studies examined learning of grammar-like visual sequences to determine whether a general deficit in statistical learning characterizes adults with language-learning disabilities. Furthermore, we tested the hypothesis that difficulty in sustaining attention during the learning task might account for differences in statistical learning. In Study 1, adults with normal language (NL) or language-learning disability (LLD) were familiarized with the visual artificial grammar and then tested using items that conformed or deviated from the grammar. In Study 2, a second sample of adults with NL and LLD were presented auditory word pairs with weak semantic associations (e.g., groom + clean) along with the visual learning task. Participants were instructed to attend to visual sequences and to ignore the auditory stimuli. Incidental encoding of these words would indicate reduced attention to the primary task. In Studies 1 and 2, both groups demonstrated learning and generalization of the artificial grammar. In Study 2, neither the NL nor the LLD group appeared to encode the words presented during the learning phase. The results argue against a general deficit in statistical learning for individuals with LLD and demonstrate that both NL and LLD learners can ignore extraneous auditory stimuli during visual learning.

  6. ESTEEM: A Novel Framework for Qualitatively Evaluating and Visualizing Spatiotemporal Embeddings in Social Media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arendt, Dustin L.; Volkova, Svitlana

    Analyzing and visualizing large amounts of social media communications and contrasting short-term conversation changes over time and geo-locations is extremely important for commercial and government applications. Earlier approaches for large-scale text stream summarization used dynamic topic models and trending words. Instead, we rely on text embeddings – low-dimensional word representations in a continuous vector space where similar words are embedded nearby each other. This paper presents ESTEEM, a novel tool for visualizing and evaluating spatiotemporal embeddings learned from streaming social media texts. Our tool allows users to monitor and analyze query words and their closest neighbors with an interactive interface. We used state-of-the-art techniques to learn embeddings and developed a visualization to represent dynamically changing relations between words in social media over time and other dimensions. This is the first interactive visualization of streaming text representations learned from social media texts that also allows users to contrast differences across multiple dimensions of the data.
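
    As a concrete illustration of the embedding-based approach this record describes, the sketch below trains a separate word-embedding space on each time slice of a text stream and prints a query word's nearest neighbors per slice. It is a minimal stand-in, not ESTEEM itself: the gensim Word2Vec settings and the toy two-slice corpus are assumptions made purely for illustration.

```python
from gensim.models import Word2Vec

# Two toy "time slices" of a text stream; real input would be streaming
# social media posts bucketed by period and/or geo-location.
slices = {
    "2015-01": [["storm", "power", "outage"], ["storm", "rain", "wind"]],
    "2015-02": [["storm", "protest", "crowd"], ["protest", "police", "crowd"]],
}

for period, sentences in slices.items():
    # One embedding space per slice: similar words end up nearby, so a
    # query word's nearest neighbours summarize its usage in that period.
    model = Word2Vec(sentences, vector_size=16, window=2,
                     min_count=1, seed=1, epochs=50)
    print(period, model.wv.most_similar("storm", topn=2))
```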

  7. Adding words to the brain's visual dictionary: novel word learning selectively sharpens orthographic representations in the VWFA.

    PubMed

    Glezer, Laurie S; Kim, Judy; Rule, Josh; Jiang, Xiong; Riesenhuber, Maximilian

    2015-03-25

    The nature of orthographic representations in the human brain is still the subject of much debate. Recent reports have claimed that the visual word form area (VWFA) in left occipitotemporal cortex contains an orthographic lexicon based on neuronal representations highly selective for individual written real words (RWs). This theory predicts that learning novel words should selectively increase neural specificity for these words in the VWFA. We trained subjects to recognize novel pseudowords (PWs) and used fMRI rapid adaptation to compare neural selectivity with RWs, untrained PWs (UTPWs), and trained PWs (TPWs). Before training, PWs elicited broadly tuned responses, whereas responses to RWs indicated tight tuning. After training, TPW responses resembled those of RWs, whereas UTPWs continued to show broad tuning. This change in selectivity was specific to the VWFA. Therefore, word learning appears to selectively increase neuronal specificity for the new words in the VWFA, thereby adding these words to the brain's visual dictionary.

  8. Serial and semantic encoding of lists of words in schizophrenia patients with visual hallucinations.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2011-03-30

    Previous research has suggested that visual hallucinations in schizophrenia are associated with abnormal salience of visual mental images. Since visual imagery is used as a mnemonic strategy to learn lists of words, increased visual imagery might impede the other commonly used strategies of serial and semantic encoding. We had previously published data on the serial and semantic strategies implemented by patients when learning lists of concrete words with different levels of semantic organisation (Brébion et al., 2004). In this paper we present a re-analysis of these data, aimed at investigating the associations between learning strategies and visual hallucinations. Results show that the patients with visual hallucinations presented less serial clustering in the non-organisable list than the other patients. In the semantically organisable list with typical instances, they presented both less serial and less semantic clustering than the other patients. Thus, patients with visual hallucinations demonstrate reduced use of serial and semantic encoding in the lists made up of fairly familiar concrete words, which enable the formation of mental images. Although these results are preliminary, we propose that this different processing of the lists stems from the abnormal salience of the mental images such patients experience from the word stimuli. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.

  9. Visual Literacy: Learn To See, See To Learn.

    ERIC Educational Resources Information Center

    Burmark, Lynell

    Because of television, advertising, and the Internet, the primary literacy of the 21st century will be visual. It is no longer enough to read and write text--students must learn to process both words and pictures. They must be able to move fluently between text and images, between literal and figurative words. This book examines the effect on…

  10. Similarity and Difference in Learning L2 Word-Form

    ERIC Educational Resources Information Center

    Hamada, Megumi; Koda, Keiko

    2011-01-01

    This study explored similarity and difference in L2 written word-form learning from a cross-linguistic perspective. This study investigated whether learners' L1 orthographic background, which influences L2 visual word recognition (e.g., Wang et al., 2003), also influences L2 word-form learning, in particular, the sensitivity to phonological and…

  11. Examining the direct and indirect effects of visual-verbal paired associate learning on Chinese word reading.

    PubMed

    Georgiou, George; Liu, Cuina; Xu, Shiyang

    2017-08-01

    Associative learning, traditionally measured with paired associate learning (PAL) tasks, has been found to predict reading ability in several languages. However, it remains unclear whether it also predicts word reading in Chinese, which is known for its ambiguous print-sound correspondences, and whether its effects are direct or indirect through the effects of other reading-related skills such as phonological awareness and rapid naming. Thus, the purpose of this study was to examine the direct and indirect effects of visual-verbal PAL on word reading in an unselected sample of Chinese children followed from the second to the third kindergarten year. A sample of 141 second-year kindergarten children (71 girls and 70 boys; mean age = 58.99 months, SD = 3.17) were followed for a year and were assessed at both times on measures of visual-verbal PAL, rapid naming, and phonological awareness. In the third kindergarten year, they were also assessed on word reading. The results of path analysis showed that visual-verbal PAL exerted a significant direct effect on word reading that was independent of the effects of phonological awareness and rapid naming. However, it also exerted significant indirect effects through phonological awareness. Taken together, these findings suggest that variations in cross-modal associative learning (as measured by visual-verbal PAL) place constraints on the development of word recognition skills irrespective of the characteristics of the orthography children are learning to read. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Do preschool children learn to read words from environmental prints?

    PubMed

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., cues apart from the words themselves, such as the color, logo, and font type cues) gradually minimized. Children aged from 3 to 5 were tested. We observed that children of different ages all performed better when words were presented in highly familiar logos compared to when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrated that young children did not easily learn words by extracting their visual form information even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4.

  13. Do Preschool Children Learn to Read Words from Environmental Prints?

    PubMed Central

    Zhao, Jing; Zhao, Pei; Weng, Xuchu; Li, Su

    2014-01-01

    Parents and teachers worldwide believe that a visual environment rich with print can contribute to young children's literacy. Children seem to recognize words in familiar logos at an early age. However, most previous studies were carried out with alphabetic scripts. Alphabetic letters regularly correspond to phonological segments in a word and provide strong cues about the identity of the whole word. Thus it was not clear whether children can learn to read words by extracting visual word form information from environmental prints. To exclude the phonological-cue confound, this study tested children's knowledge of Chinese words embedded in familiar logos. Four environmental logos were employed and transformed into four versions with the contextual cues (i.e., cues apart from the words themselves, such as the color, logo, and font type cues) gradually minimized. Children aged from 3 to 5 were tested. We observed that children of different ages all performed better when words were presented in highly familiar logos compared to when they were presented in a plain fashion, devoid of context. This advantage for familiar logos was also present when the contextual information was only partial. However, the role of various cues in learning words changed with age. The color and logo cues had a larger effect in 3- and 4-year-olds than in 5-year-olds, while the font type cue played a greater role in 5-year-olds than in the other two groups. Our findings demonstrated that young children did not easily learn words by extracting their visual form information even from familiar environmental prints. However, children aged 5 began to pay more attention to the visual form information of words in highly familiar logos than those aged 3 and 4. PMID:24465677

  14. Visual feature-tolerance in the reading network.

    PubMed

    Rauschecker, Andreas M; Bowen, Reno F; Perry, Lee M; Kevan, Alison M; Dougherty, Robert F; Wandell, Brian A

    2011-09-08

    A century of neurology and neuroscience shows that seeing words depends on ventral occipital-temporal (VOT) circuitry. Typically, reading is learned using high-contrast line-contour words. We explored whether a specific VOT region, the visual word form area (VWFA), learns to see only these words or recognizes words independent of the specific shape-defining visual features. Word forms were created using atypical features (motion-dots, luminance-dots) whose statistical properties control word-visibility. We measured fMRI responses as word form visibility varied, and we used TMS to interfere with neural processing in specific cortical circuits, while subjects performed a lexical decision task. For all features, VWFA responses increased with word-visibility and correlated with performance. TMS applied to motion-specialized area hMT+ disrupted reading performance for motion-dots, but not line-contours or luminance-dots. A quantitative model describes feature-convergence in the VWFA and relates VWFA responses to behavioral performance. These findings suggest how visual feature-tolerance in the reading network arises through signal convergence from feature-specialized cortical areas. Copyright © 2011 Elsevier Inc. All rights reserved.

  15. Relationships between Visual and Auditory Perceptual Skills and Comprehension in Students with Learning Disabilities.

    ERIC Educational Resources Information Center

    Weaver, Phyllis A.; Rosner, Jerome

    1979-01-01

    Scores of 25 learning disabled students (aged 9 to 13) were compared on five tests: a visual-perceptual test (Coloured Progressive Matrices); an auditory-perceptual test (Auditory Motor Placement); a listening and reading comprehension test (Durrell Listening-Reading Series); and a word recognition test (Word Recognition subtest, Diagnostic…

  16. Deep learning of orthographic representations in baboons.

    PubMed

    Hannagan, Thomas; Ziegler, Johannes C; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan

    2014-01-01

    What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process.
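
    The record describes deep convolutional networks that map pixel images of letter strings onto binary word/nonword responses. The sketch below shows the general shape of such a model in PyTorch; the layer sizes, the random tensors standing in for stimuli, and the plain cross-entropy training loop are illustrative assumptions, not the authors' architecture or the baboons' reinforcement schedule.

```python
import torch
import torch.nn as nn

# Tiny "ventral stream" stand-in: pixels of a letter string in, two
# logits (word vs. nonword) out.
net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(16 * 4 * 21, 2),   # 28x96 inputs shrink to 16 maps of 4x21
)
opt = torch.optim.SGD(net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 28, 96)   # placeholder letter-string images
labels = torch.randint(0, 2, (32,))   # 1 = word, 0 = nonword

for _ in range(10):   # stands in for the trial-by-trial training regime
    opt.zero_grad()
    loss = loss_fn(net(images), labels)
    loss.backward()
    opt.step()
print(float(loss))
```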

  17. Linguistic labels, dynamic visual features, and attention in infant category learning.

    PubMed

    Deng, Wei Sophia; Sloutsky, Vladimir M

    2015-06-01

    How do words affect categorization? According to some accounts, even early in development words are category markers and are different from other features. According to other accounts, early in development words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye-tracking results indicated that infants exhibited better category learning in the motion-defined condition than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared with the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. Copyright © 2015 Elsevier Inc. All rights reserved.

  18. Linguistic Labels, Dynamic Visual Features, and Attention in Infant Category Learning

    PubMed Central

    Deng, Wei (Sophia); Sloutsky, Vladimir M.

    2015-01-01

    How do words affect categorization? According to some accounts, even early in development, words are category markers and are different from other features. According to other accounts, early in development, words are part of the input and are akin to other features. The current study addressed this issue by examining the role of words and dynamic visual features in category learning in 8- to 12-month-old infants. Infants were familiarized with exemplars from one category in a label-defined or motion-defined condition and then tested with prototypes from the studied category and from a novel contrast category. Eye tracking results indicated that infants exhibited better category learning in the motion-defined than in the label-defined condition, and their attention was more distributed among different features when there was a dynamic visual feature compared to the label-defined condition. These results provide little evidence for the idea that linguistic labels are category markers that facilitate category learning. PMID:25819100

  19. Tracking the Eye Movement of Four Years Old Children Learning Chinese Words.

    PubMed

    Lin, Dan; Chen, Guangyao; Liu, Yingyi; Liu, Jiaxin; Pan, Jue; Mo, Lei

    2018-02-01

    Storybook reading is the major source of literacy exposure for beginning readers. The present study tracked 4-year-old Chinese children's eye movements while they were reading simulated storybook pages. Their eye-movement patterns were examined in relation to their word learning gains. The same reading list, consisting of 20 two-character Chinese words, was used in the pretest, 5-min eye-tracking learning session, and posttest. Additionally, visual spatial skill and phonological awareness were assessed in the pretest as cognitive controls. The results showed that the children's attention was attracted quickly by pictures, on which their attention was focused most, with only 13% of the time spent looking at words. Moreover, significant learning gains in word reading were observed from the pretest to posttest after 5-min exposure to simulated storybook pages on which the words, pictures, and pronunciations of two-character words were present. Furthermore, the children's attention to words significantly predicted posttest reading beyond socioeconomic status, age, visual spatial skill, phonological awareness, and pretest reading performance. This eye-movement evidence from children as young as four years reading a non-alphabetic script (i.e., Chinese) demonstrates that children can learn words effectively with minimal exposure and little instruction; these findings suggest that learning to read requires attention to the words themselves. The study contributes to our understanding of early reading acquisition with eye-movement evidence from beginning readers.

  20. A Dual-Route Model that Learns to Pronounce English Words

    NASA Technical Reports Server (NTRS)

    Remington, Roger W.; Miller, Craig S.; Null, Cynthia H. (Technical Monitor)

    1995-01-01

    This paper describes a model that learns to pronounce English words. Learning occurs in two modules: 1) a rule-based module that constructs pronunciations by phonetic analysis of the letter string, and 2) a whole-word module that learns to associate subsets of letters to the pronunciation, without phonetic analysis. In a simulation on a corpus of over 300 words the model produced pronunciation latencies consistent with the effects of word frequency and orthographic regularity observed in human data. Implications of the model for theories of visual word processing and reading instruction are discussed.
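
    The dual-route idea in this record lends itself to a compact sketch: a whole-word route that retrieves learned pronunciations directly, and a rule route that assembles a pronunciation letter by letter. The toy letter-to-phoneme rules and two-word lexicon below are invented for illustration and are far simpler than the model's phonetic analysis.

```python
# Toy letter-to-phoneme rules (the "phonetic analysis" route) and a
# small learned lexicon (the whole-word route); both are invented here.
RULES = {"a": "æ", "c": "k", "e": "ɛ", "h": "h", "i": "ɪ",
         "n": "n", "s": "s", "t": "t", "v": "v"}
LEXICON = {"have": "hæv", "cent": "sɛnt"}

def rule_route(word):
    # Assembles a regular pronunciation; misfires on exception words,
    # e.g. it renders "have" as "hævɛ" with the silent e pronounced.
    return "".join(RULES[ch] for ch in word)

def pronounce(word):
    # Familiar words are retrieved whole; novel strings fall back on
    # the rules, giving the regularity effects seen for non-words.
    return LEXICON.get(word, rule_route(word))

print(pronounce("have"))   # 'hæv'  via the whole-word route
print(pronounce("vat"))    # 'væt'  assembled by rule
```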

  1. Learning style, judgements of learning, and learning of verbal and visual information.

    PubMed

    Knoll, Abby R; Otani, Hajime; Skeel, Reid L; Van Horn, K Roger

    2017-08-01

    The concept of learning style is immensely popular despite the lack of evidence showing that learning style influences performance. This study tested the hypothesis that the popularity of learning style is maintained because it is associated with subjective aspects of learning, such as judgements of learning (JOLs). Preference for verbal and visual information was assessed using the revised Verbalizer-Visualizer Questionnaire (VVQ). Then, participants studied a list of word pairs and a list of picture pairs, making JOLs (immediate, delayed, and global) while studying each list. Learning was tested by cued recall. The results showed that higher VVQ verbalizer scores were associated with higher immediate JOLs for words, and higher VVQ visualizer scores were associated with higher immediate JOLs for pictures. There was no association between VVQ scores and recall or JOL accuracy. As predicted, learning style was associated with subjective aspects of learning but not objective aspects of learning. © 2016 The British Psychological Society.

  2. Bedding down new words: Sleep promotes the emergence of lexical competition in visual word recognition.

    PubMed

    Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne

    2017-08-01

    Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.

  3. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    PubMed

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  4. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training

    PubMed Central

    Eberhardt, Silvio P.; Auer Jr., Edward T.; Bernstein, Lynne E.

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee’s primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee’s lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT). PMID:25400566

  5. Dynamic versus Static Dictionary with and without Printed Focal Words in e-Book Reading as Facilitator for Word Learning

    ERIC Educational Resources Information Center

    Korat, Ofra; Levin, Iris; Ben-Shabt, Anat; Shneor, Dafna; Bokovza, Limor

    2014-01-01

    We investigated the extent to which a dictionary embedded in an e-book with static or dynamic visuals, with and without printed focal words, affects word learning. A pretest-posttest design was used to measure gains in expressive word meanings and their spelling. The participants included 250 Hebrew-speaking second graders from…

  6. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    PubMed

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. Forty-eight students were administered, in the learning phase, 125 words for 4 s, randomly presented half in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched previously studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites with respect to old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Immediate lexical integration of novel word forms

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2014-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Exp 1) or passive (Exp 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants’ fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. PMID:25460382

  8. Immediate lexical integration of novel word forms.

    PubMed

    Kapnoula, Efthymia C; Packard, Stephanie; Gupta, Prahlad; McMurray, Bob

    2015-01-01

    It is well known that familiar words inhibit each other during spoken word recognition. However, we do not know how and under what circumstances newly learned words become integrated with the lexicon in order to engage in this competition. Previous work on word learning has highlighted the importance of offline consolidation (Gaskell & Dumay, 2003) and meaning (Leach & Samuel, 2007) to establish this integration. In two experiments we test the necessity of these factors by examining the inhibition between newly learned items and familiar words immediately after learning. Participants learned a set of nonwords without meanings in active (Experiment 1) or passive (Experiment 2) exposure paradigms. After training, participants performed a visual world paradigm task to assess inhibition from these newly learned items. An analysis of participants' fixations suggested that the newly learned words were able to engage in competition with known words without any consolidation. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Deep Learning of Orthographic Representations in Baboons

    PubMed Central

    Hannagan, Thomas; Ziegler, Johannes C.; Dufau, Stéphane; Fagot, Joël; Grainger, Jonathan

    2014-01-01

    What is the origin of our ability to learn orthographic knowledge? We use deep convolutional networks to emulate the primate's ventral visual stream and explore the recent finding that baboons can be trained to discriminate English words from nonwords [1]. The networks were exposed to the exact same sequence of stimuli and reinforcement signals as the baboons in the experiment, and learned to map real visual inputs (pixels) of letter strings onto binary word/nonword responses. We show that the networks' highest levels of representations were indeed sensitive to letter combinations as postulated in our previous research. The model also captured the key empirical findings, such as generalization to novel words, along with some intriguing inter-individual differences. The present work shows the merits of deep learning networks that can simulate the whole processing chain all the way from the visual input to the response while allowing researchers to analyze the complex representations that emerge during the learning process. PMID:24416300

  10. The company objects keep: Linking referents together during cross-situational word learning.

    PubMed

    Zettersten, Martin; Wojcik, Erica; Benitez, Viridiana L; Saffran, Jenny

    2018-04-01

    Learning the meanings of words involves not only linking individual words to referents but also building a network of connections among entities in the world, concepts, and words. Previous studies reveal that infants and adults track the statistical co-occurrence of labels and objects across multiple ambiguous training instances to learn words. However, it is less clear whether, given distributional or attentional cues, learners also encode associations amongst the novel objects. We investigated the consequences of two types of cues that highlighted object-object links in a cross-situational word learning task: distributional structure - how frequently the referents of novel words occurred together - and visual context - whether the referents were seen on matching backgrounds. Across three experiments, we found that in addition to learning novel words, adults formed connections between frequently co-occurring objects. These findings indicate that learners exploit statistical regularities to form multiple types of associations during word learning.
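
    The cross-situational mechanism described here reduces, at its simplest, to accumulating co-occurrence statistics across individually ambiguous trials. The sketch below tracks both label-object and object-object co-occurrence counts over a handful of invented trials; it is a bare-bones illustration of the statistical cue, not the authors' experimental design.

```python
from collections import Counter
from itertools import combinations, product

# Each invented trial pairs two novel labels with two visible objects;
# no single trial disambiguates which label names which object.
trials = [
    ({"modi", "dax"}, {"cup", "ball"}),
    ({"modi", "toma"}, {"cup", "shoe"}),
    ({"dax", "toma"}, {"ball", "shoe"}),
]

label_object = Counter()
object_object = Counter()
for labels, objects in trials:
    label_object.update(product(labels, objects))           # word-referent cue
    object_object.update(combinations(sorted(objects), 2))  # referent-referent links

# Across trials, each label co-occurs twice with its "true" referent and
# only once with everything else, so the correct pairings top the list.
print(label_object.most_common(3))
print(object_object.most_common(2))
```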

  11. The Development of Cortical Sensitivity to Visual Word Forms

    ERIC Educational Resources Information Center

    Ben-Shachar, Michal; Dougherty, Robert F.; Deutsch, Gayle K.; Wandell, Brian A.

    2011-01-01

    The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous…

  12. Pitch enhancement facilitates word learning across visual contexts

    PubMed Central

    Filippi, Piera; Gingras, Bruno; Fitch, W. Tecumseh

    2014-01-01

    This study investigates word-learning using a new experimental paradigm that integrates three processes: (a) extracting a word out of a continuous sound sequence, (b) inferring its referential meanings in context, (c) mapping the segmented word onto its broader intended referent, such as other objects of the same semantic category, and to novel utterances. Previous work has examined the role of statistical learning and/or of prosody in each of these processes separately. Here, we combine these strands of investigation into a single experimental approach, in which participants viewed a photograph belonging to one of three semantic categories while hearing a complex, five-word utterance containing a target word. Six between-subjects conditions were tested with 20 adult participants each. In condition 1, the only cue to word-meaning mapping was the co-occurrence of word and referents. This statistical cue was present in all conditions. In condition 2, the target word was sounded at a higher pitch. In condition 3, random words were sounded at a higher pitch, creating an inconsistent cue. In condition 4, the duration of the target word was lengthened. In conditions 5 and 6, an extraneous acoustic cue and a visual cue were associated with the target word, respectively. Performance in this word-learning task was significantly higher than that observed with simple co-occurrence only when pitch prominence consistently marked the target word. We discuss implications for the pragmatic value of pitch marking as well as the relevance of our findings to language acquisition and language evolution. PMID:25566144

  13. Effects of Referent Token Variability on L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Sommers, Mitchell S.; Barcroft, Joe

    2013-01-01

    Previous research has demonstrated substantially improved second language (L2) vocabulary learning when spoken word forms are varied using multiple talkers, speaking styles, or speaking rates. In contrast, the present study varied visual representations of referents for target vocabulary. English speakers learned Spanish words in formats of no…

  14. The strengths and weaknesses in verbal short-term memory and visual working memory in children with hearing impairment and additional language learning difficulties.

    PubMed

    Willis, Suzi; Goldbart, Juliet; Stansfield, Jois

    2014-07-01

    To compare the verbal short-term memory and visual working memory abilities of six children with congenital hearing impairment identified as having significant language learning difficulties with normative data from typically hearing children, using standardized memory assessments. Six children with hearing loss aged 8-15 years were assessed on measures of verbal short-term memory (non-word and word recall) and visual working memory annually over a two year period. All children had cognitive abilities within normal limits and used spoken language as the primary mode of communication. The language assessment scores at the beginning of the study revealed that all six participants exhibited delays of two years or more on standardized assessments of receptive and expressive vocabulary and spoken language. The children with hearing impairment scored significantly higher on the non-word recall task than on the "real" word recall task. They also exhibited significantly higher scores on visual working memory than those of the age-matched sample from the standardized memory assessment. Each of the six participants in this study displayed the same pattern of strengths and weaknesses in verbal short-term memory and visual working memory despite their very different chronological ages. The children's poor ability to recall single syllable words relative to non-words is a clinical indicator of their difficulties in verbal short-term memory. However, the children with hearing impairment do not display generalized processing difficulties and indeed demonstrate strengths in visual working memory. The poor ability to recall words, in combination with difficulties with early word learning, may be an indicator of children with hearing impairment who will struggle to develop spoken language equal to that of their normally hearing peers. This early identification has the potential to allow for target-specific intervention that may remediate their difficulties. Copyright © 2014. Published by Elsevier Ireland Ltd.

  15. Incidental orthographic learning during a color detection task.

    PubMed

    Protopapas, Athanassios; Mitsi, Anna; Koustoumbardis, Miltiadis; Tsitsopoulou, Sofia M; Leventi, Marianna; Seitz, Aaron R

    2017-09-01

    Orthographic learning refers to the acquisition of knowledge about specific spelling patterns forming words and about general biases and constraints on letter sequences. It is thought to occur by strengthening simultaneously activated visual and phonological representations during reading. Here we demonstrate that a visual perceptual learning procedure that leaves no time for articulation can result in orthographic learning evidenced in improved reading and spelling performance. We employed task-irrelevant perceptual learning (TIPL), in which the stimuli to be learned are paired with an easy task target. Assorted line drawings and difficult-to-spell words were presented in red color among sequences of other black-colored words and images presented in rapid succession, constituting a fast-TIPL procedure with color detection being the explicit task. In five experiments, Greek children in Grades 4-5 showed increased recognition of words and images that had appeared in red, both during and after the training procedure, regardless of within-training testing, and also when targets appeared in blue instead of red. Significant transfer to reading and spelling emerged only after increased training intensity. In a sixth experiment, children in Grades 2-3 showed generalization to words not presented during training that carried the same derivational affixes as in the training set. We suggest that reinforcement signals related to detection of the target stimuli contribute to the strengthening of orthography-phonology connections beyond earlier levels of visually-based orthographic representation learning. These results highlight the potential of perceptual learning procedures for the reinforcement of higher-level orthographic representations. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  16. Evaluating a Bilingual Text-Mining System with a Taxonomy of Key Words and Hierarchical Visualization for Understanding Learner-Generated Text

    ERIC Educational Resources Information Center

    Kong, Siu Cheung; Li, Ping; Song, Yanjie

    2018-01-01

    This study evaluated a bilingual text-mining system, which incorporated a bilingual taxonomy of key words and provided hierarchical visualization, for understanding learner-generated text in the learning management systems through automatic identification and counting of matching key words. A class of 27 in-service teachers studied a course…
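
    The core operation this record describes, automatic identification and counting of taxonomy key words in learner-generated text, can be sketched in a few lines. The tiny bilingual taxonomy and the whitespace tokenizer below are assumptions for illustration (real Chinese text would need proper word segmentation), not the evaluated system.

```python
from collections import Counter

# Invented two-category bilingual taxonomy; the real system's taxonomy
# and matching rules are richer than this.
TAXONOMY = {
    "pedagogy": {"scaffolding", "feedback", "評估"},
    "technology": {"e-learning", "平台", "app"},
}

def count_matches(text):
    # Whitespace tokenization only; real Chinese text would need a
    # word segmenter before matching.
    tokens = text.lower().split()
    return Counter({category: sum(tokens.count(k) for k in keywords)
                    for category, keywords in TAXONOMY.items()})

print(count_matches("timely feedback and scaffolding via an e-learning app"))
# Counter({'pedagogy': 2, 'technology': 2})
```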

  17. Reading Habits, Perceptual Learning, and Recognition of Printed Words

    ERIC Educational Resources Information Center

    Nazir, Tatjana A.; Ben-Boutayab, Nadia; Decoppet, Nathalie; Deutsch, Avital; Frost, Ram

    2004-01-01

    The present work aims at demonstrating that visual training associated with the act of reading modifies the way we perceive printed words. As reading does not train all parts of the retina in the same way but favors regions on the side in the direction of scanning, visual word recognition should be better at retinal locations that are frequently…

  18. Neurophysiological evidence for the interplay of speech segmentation and word-referent mapping during novel word learning.

    PubMed

    François, Clément; Cunillera, Toni; Garcia, Enara; Laine, Matti; Rodriguez-Fornells, Antoni

    2017-04-01

    Learning a new language requires the identification of word units from continuous speech (the speech segmentation problem) and mapping them onto conceptual representation (the word to world mapping problem). Recent behavioral studies have revealed that the statistical properties found within and across modalities can serve as cues for both processes. However, segmentation and mapping have been largely studied separately, and thus it remains unclear whether both processes can be accomplished at the same time and if they share common neurophysiological features. To address this question, we recorded EEG of 20 adult participants during both an audio alone speech segmentation task and an audiovisual word-to-picture association task. The participants were tested for both the implicit detection of online mismatches (structural auditory and visual semantic violations) as well as for the explicit recognition of words and word-to-picture associations. The ERP results from the learning phase revealed a delayed learning-related fronto-central negativity (FN400) in the audiovisual condition compared to the audio alone condition. Interestingly, while online structural auditory violations elicited clear MMN/N200 components in the audio alone condition, visual-semantic violations induced meaning-related N400 modulations in the audiovisual condition. The present results support the idea that speech segmentation and meaning mapping can take place in parallel and act in synergy to enhance novel word learning. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The Role of the Phonological Loop in English Word Learning: A Comparison of Chinese ESL Learners and Native Speakers

    ERIC Educational Resources Information Center

    Hamada, Megumi; Koda, Keiko

    2011-01-01

    Although the role of the phonological loop in word-retention is well documented, research in Chinese character retention suggests the involvement of non-phonological encoding. This study investigated whether the extent to which the phonological loop contributes to learning and remembering visually introduced words varies between college-level…

  20. Learning during processing: Word learning doesn’t wait for word recognition to finish

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2017-01-01

    Previous research on associative learning has uncovered detailed aspects of the process, including what types of things are learned, how they are learned, and where in the brain such learning occurs. However, perceptual processes, such as stimulus recognition and identification, take time to unfold. Previous studies of learning have not addressed when, during the course of these dynamic recognition processes, learned representations are formed and updated. If learned representations are formed and updated while recognition is ongoing, the result of learning may incorporate spurious, partial information. For example, during word recognition, words take time to be identified, and competing words are often active in parallel. If learning proceeds before this competition resolves, representations may be influenced by the preliminary activations present at the time of learning. In three experiments using word learning as a model domain, we provide evidence that learning reflects the ongoing dynamics of auditory and visual processing during a learning event. These results show that learning can occur before stimulus recognition processes are complete; learning does not wait for ongoing perceptual processing to complete. PMID:27471082

  1. Comparing Auditory-Only and Audiovisual Word Learning for Children with Hearing Loss.

    PubMed

    McDaniel, Jena; Camarata, Stephen; Yoder, Paul

    2018-05-15

    Although reducing visual input to emphasize auditory cues is a common practice in pediatric auditory (re)habilitation, the extant literature offers minimal empirical evidence for whether unisensory auditory-only (AO) or multisensory audiovisual (AV) input is more beneficial to children with hearing loss for developing spoken language skills. Using an adapted alternating treatments single case research design, we evaluated the effectiveness and efficiency of a receptive word learning intervention with and without access to visual speechreading cues. Four preschool children with prelingual hearing loss participated. Based on probes without visual cues, three participants demonstrated strong evidence for learning in the AO and AV conditions relative to a control (no-teaching) condition. No participants demonstrated a differential rate of learning between AO and AV conditions. Neither an inhibitory effect predicted by a unisensory theory nor a beneficial effect predicted by a multisensory theory for providing visual cues was identified. Clinical implications are discussed.

  2. Phoneme Awareness, Visual-Verbal Paired-Associate Learning, and Rapid Automatized Naming as Predictors of Individual Differences in Reading Ability

    ERIC Educational Resources Information Center

    Warmington, Meesha; Hulme, Charles

    2012-01-01

    This study examines the concurrent relationships between phoneme awareness, visual-verbal paired-associate learning, rapid automatized naming (RAN), and reading skills in 7- to 11-year-old children. Path analyses showed that visual-verbal paired-associate learning and RAN, but not phoneme awareness, were unique predictors of word recognition,…

  3. The Effect of Visual-Spatial Stimulation on Emergent Readers at Risk for Specific Learning Disability in Reading

    ERIC Educational Resources Information Center

    Zascavage, Victoria Selden; McKenzie, Ginger Kelley; Buot, Max; Woods, Carol; Orton-Gillingham, Fellow

    2012-01-01

    This study compared word recognition for words written in a traditional flat font to the same words written in a three-dimensional appearing font determined to create a right hemispheric stimulation. The participants were emergent readers enrolled in Montessori schools in the United States learning to read basic CVC (consonant, vowel, consonant)…

  4. Using spoken words to guide open-ended category formation.

    PubMed

    Chauhan, Aneesh; Seabra Lopes, Luís

    2011-11-01

    Naming is a powerful cognitive tool that facilitates categorization by forming an association between words and their referents. There is evidence in child development literature that strong links exist between early word-learning and conceptual development. A growing view is also emerging that language is a cultural product created and acquired through social interactions. Inspired by these studies, this paper presents a novel learning architecture for category formation and vocabulary acquisition in robots through active interaction with humans. This architecture is open-ended and is capable of acquiring new categories and category names incrementally. The process can be compared to language grounding in children at single-word stage. The robot is embodied with visual and auditory sensors for world perception. A human instructor uses speech to teach the robot the names of the objects present in a visually shared environment. The robot uses its perceptual input to ground these spoken words and dynamically form/organize category descriptions in order to achieve better categorization. To evaluate the learning system at word-learning and category formation tasks, two experiments were conducted using a simple language game involving naming and corrective feedback actions from the human user. The obtained results are presented and discussed in detail.
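
    A minimal sketch of the open-ended, name-driven category learning this record describes: each taught word indexes a category whose description is updated incrementally from percepts, and corrective feedback from the human simply triggers another teaching step. The two-dimensional feature vectors and nearest-centroid update are assumed stand-ins for the robot's visual and auditory processing.

```python
import numpy as np

categories = {}   # spoken word -> (running mean feature vector, count)

def teach(word, features):
    # Ground the word by folding the current percept into its category.
    mean, n = categories.get(word, (np.zeros_like(features), 0))
    categories[word] = ((mean * n + features) / (n + 1), n + 1)

def name_object(features):
    # Name a percept with the closest learned category, if any.
    if not categories:
        return None
    return min(categories,
               key=lambda w: np.linalg.norm(categories[w][0] - features))

teach("cup", np.array([0.9, 0.1]))    # instructor names objects in view
teach("ball", np.array([0.1, 0.9]))
guess = name_object(np.array([0.8, 0.2]))
if guess != "cup":                    # corrective feedback on a wrong guess
    teach("cup", np.array([0.8, 0.2]))
print(guess)                          # 'cup'
```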

  5. Constraints on the Transfer of Perceptual Learning in Accented Speech

    PubMed Central

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:th]), facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  6. Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

    PubMed

    Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G

    2017-04-01

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree.

  7. Orthographic processing in pigeons (Columba livia)

    PubMed Central

    Scarf, Damian; Boy, Karoline; Uber Reinert, Anelisie; Devine, Jack; Güntürkün, Onur; Colombo, Michael

    2016-01-01

    Learning to read involves the acquisition of letter–sound relationships (i.e., decoding skills) and the ability to visually recognize words (i.e., orthographic knowledge). Although decoding skills are clearly human-unique, given they are seated in language, recent research and theory suggest that orthographic processing may derive from the exaptation or recycling of visual circuits that evolved to recognize everyday objects and shapes in our natural environment. An open question is whether orthographic processing is limited to visual circuits that are similar to our own or a product of plasticity common to many vertebrate visual systems. Here we show that pigeons, organisms that separated from humans more than 300 million y ago, process words orthographically. Specifically, we demonstrate that pigeons trained to discriminate words from nonwords picked up on the orthographic properties that define words and used this knowledge to identify words they had never seen before. In addition, the pigeons were sensitive to the bigram frequencies of words (i.e., the common co-occurrence of certain letter pairs), the edit distance between nonwords and words, and the internal structure of words. Our findings demonstrate that visual systems organizationally distinct from the primate visual system can also be exapted or recycled to process the visual word form. PMID:27638211

  8. Operational Symbols: Can a Picture Be Worth a Thousand Words?

    DTIC Science & Technology

    1991-04-01

    internal visualization, because forms are to visual communication what words are to verbal communication. From a psychological point of view, the process... Visual Communication. Washington, DC: National Education Association, 1960. Bohannan, Anthony G. "C3I In Support of the Land Commander," in Principles... captions guide what is learned from a picture or graphic. 40. John C. Ball and Francis C. Byrnes, ed., Research, Principles, and Practices in Visual

  9. Visual-Verbal Redundancy Effects on Television News Learning.

    ERIC Educational Resources Information Center

    Reese, Stephen D.

    1984-01-01

    Discusses methodology and results of a study focusing on improving television's informing process by examining effects of combining visual and captioned information with a reporter's script. Results indicate redundant pictures and words enhance learning, while adding redundant print information either had no effect or detracted from learning. (MBR)

  10. Color-Coded Vowels and Spelling with Visual Cues in Beginning Reading.

    ERIC Educational Resources Information Center

    Turner, Ann Coffeen

    Twenty-four beginning readers participated in a study of the effectiveness of cued learning. The study was carried out in two phases--a letter-learning phase and a word-learning phase. The children were taught one at a time by the same teacher over a four-year period. During the word-learning phase, one fourth of the children used a vowels-only…

  11. Effects of audio-visual presentation of target words in word translation training

    NASA Astrophysics Data System (ADS)

    Akahane-Yamada, Reiko; Komaki, Ryo; Kubo, Rieko

    2004-05-01

    Komaki and Akahane-Yamada (Proc. ICA2004) used a 2AFC translation task in vocabulary training, in which the target word is presented visually in the orthographic form of one language, and the appropriate meaning in another language has to be chosen between two alternatives. The present paper examined the effect of audio-visual presentation of the target word when native speakers of Japanese learn to translate English words into Japanese. Pairs of English words contrasting in several phonemic distinctions (e.g., /r/-/l/, /b/-/v/, etc.) were used as word materials and presented in three conditions: visual-only (V), audio-only (A), and audio-visual (AV). Identification accuracy for these words, produced by two talkers, was also assessed. In the pretest, accuracy was lowest for A stimuli, implying that insufficient translation ability and listening ability interact when an aurally presented word has to be translated. However, there was no difference in accuracy between V and AV stimuli, suggesting that participants translated the words on the basis of visual information alone. The effect of translation training with AV stimuli did not transfer to identification ability, showing that additional audio information during translation does not by itself improve speech perception. Further examination is necessary to determine an effective L2 training method. [Work supported by TAO, Japan.]

  12. The Concreteness Effect and the Bilingual Lexicon: The Impact of Visual Stimuli Attachment on Meaning Recall of Abstract L2 Words

    ERIC Educational Resources Information Center

    Farley, Andrew P.; Ramonda, Kris; Liu, Xun

    2012-01-01

    According to the Dual-Coding Theory (Paivio & Desrochers, 1980), words that are associated with rich visual imagery are more easily learned than abstract words due to what is termed the concreteness effect (Altarriba & Bauer, 2004; de Groot, 1992, de Groot et al., 1994; ter Doest & Semin, 2005). The present study examined the effects of attaching…

  13. The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

    ERIC Educational Resources Information Center

    Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.

    2017-01-01

    Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…

  14. The time course of spoken word learning and recognition: studies with artificial lexicons.

    PubMed

    Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine

    2003-06-01

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

  15. From a Gloss to a Learning Tool: Does Visual Aids Enhance Better Sentence Comprehension?

    ERIC Educational Resources Information Center

    Sato, Takeshi; Suzuki, Akio

    2012-01-01

    The aim of this study is to optimize CALL environments as a learning tool rather than a gloss, focusing on the learning of polysemous words that refer to spatial relationships between objects. A lot of research has already been conducted to examine the efficacy of visual glosses while reading L2 texts and has reported that visual glosses can be…

  16. The development of cortical sensitivity to visual word forms.

    PubMed

    Ben-Shachar, Michal; Dougherty, Robert F; Deutsch, Gayle K; Wandell, Brian A

    2011-09-01

    The ability to extract visual word forms quickly and efficiently is essential for using reading as a tool for learning. We describe the first longitudinal fMRI study to chart individual changes in cortical sensitivity to written words as reading develops. We conducted four annual measurements of brain function and reading skills in a heterogeneous group of children, initially 7-12 years old. The results show age-related increase in children's cortical sensitivity to word visibility in posterior left occipito-temporal sulcus (LOTS), nearby the anatomical location of the visual word form area. Moreover, the rate of increase in LOTS word sensitivity specifically correlates with the rate of improvement in sight word efficiency, a measure of speeded overt word reading. Other cortical regions, including V1, posterior parietal cortex, and the right homologue of LOTS, did not demonstrate such developmental changes. These results provide developmental support for the hypothesis that LOTS is part of the cortical circuitry that extracts visual word forms quickly and efficiently and highlight the importance of developing cortical sensitivity to word visibility in reading acquisition.

  17. Perceptual and academic patterns of learning-disabled/gifted students.

    PubMed

    Waldron, K A; Saphire, D G

    1992-04-01

    This research explored the ways in which gifted children with learning disabilities perceive and recall auditory and visual input and apply this information to reading, mathematics, and spelling. Twenty-four learning-disabled/gifted children and a matched control group of normally achieving gifted students were tested for oral reading, word recognition and analysis, listening comprehension, and spelling. In mathematics, they were tested for numeration, mental and written computation, word problems, and numerical reasoning. To explore perception and memory skills, students were administered formal tests of visual and auditory memory as well as auditory discrimination of sounds. Their responses to reading and to mathematical computations were further considered for evidence of problems in visual discrimination, visual sequencing, and visual-spatial areas. Analyses indicated that these learning-disabled/gifted students were significantly weaker than controls in their decoding skills, in spelling, and in most areas of mathematics. They were also significantly weaker in auditory discrimination and memory, and in visual discrimination, sequencing, and spatial abilities. The conclusion is that these underlying perceptual and memory deficits may be related to the students' academic problems.

  18. Visual noise disrupts conceptual integration in reading.

    PubMed

    Gao, Xuefei; Stine-Morrow, Elizabeth A L; Noh, Soo Rim; Eskew, Rhea T

    2011-02-01

    The Effortfulness Hypothesis suggests that sensory impairment (either simulated or age-related) may decrease capacity for semantic integration in language comprehension. We directly tested this hypothesis by measuring resource allocation to different levels of processing during reading (i.e., word vs. semantic analysis). College students read three sets of passages word-by-word, one at each of three levels of dynamic visual noise. There was a reliable interaction between processing level and noise, such that visual noise increased resources allocated to word-level processing, at the cost of attention paid to semantic analysis. Recall of the most important ideas also decreased with increasing visual noise. Results suggest that sensory challenge can impair higher-level cognitive functions in learning from text, supporting the Effortfulness Hypothesis.

  19. Interactive Word Walls

    ERIC Educational Resources Information Center

    Jackson, Julie; Narvaez, Rose

    2013-01-01

    It is common to see word walls displaying the vocabulary that students have learned in class. Word walls serve as visual scaffolds and are a classroom strategy used to reinforce reading and language arts instruction. Research shows a strong relationship between student word knowledge and academic achievement (Stahl and Fairbanks 1986). As a…

  1. Syntactic Categorization in French-Learning Infants

    ERIC Educational Resources Information Center

    Shi, Rushen; Melancon, Andreane

    2010-01-01

    Recent work showed that infants recognize and store function words starting from the age of 6-8 months. Using a visual fixation procedure, the present study tested whether French-learning 14-month-olds have the knowledge of syntactic categories of determiners and pronouns, respectively, and whether they can use these function words for…

  2. Decoding and disrupting left midfusiform gyrus activity during word reading

    PubMed Central

    Hirshorn, Elizabeth A.; Li, Yuanning; Ward, Michael J.; Richardson, R. Mark; Fiez, Julie A.; Ghuman, Avniel Singh

    2016-01-01

    The nature of the visual representation for words has been fiercely debated for over 150 y. We used direct brain stimulation, pre- and postsurgical behavioral measures, and intracranial electroencephalography to provide support for, and elaborate upon, the visual word form hypothesis. This hypothesis states that activity in the left midfusiform gyrus (lmFG) reflects visually organized information about words and word parts. In patients with electrodes placed directly in their lmFG, we found that disrupting lmFG activity through stimulation, and later surgical resection in one of the patients, led to impaired perception of whole words and letters. Furthermore, using machine-learning methods to analyze the electrophysiological data from these electrodes, we found that information contained in early lmFG activity was consistent with an orthographic similarity space. Finally, the lmFG contributed to at least two distinguishable stages of word processing, an early stage that reflects gist-level visual representation sensitive to orthographic statistics, and a later stage that reflects more precise representation sufficient for the individuation of orthographic word forms. These results provide strong support for the visual word form hypothesis and demonstrate that across time the lmFG is involved in multiple stages of orthographic representation. PMID:27325763
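
    The abstract does not spell out the decoding analysis, so the sketch below only illustrates the generic logic of asking whether a pattern space is consistent with an orthographic similarity space: random stand-in response patterns, Levenshtein distance as the orthographic metric, and a rank correlation between the two sets of pairwise distances. None of this is the authors' pipeline or data.

    ```python
    import itertools
    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    def edit_distance(a, b):
        """Plain Levenshtein distance between two words."""
        d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
        d[:, 0], d[0, :] = np.arange(len(a) + 1), np.arange(len(b) + 1)
        for i, ca in enumerate(a, 1):
            for j, cb in enumerate(b, 1):
                d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                              d[i - 1, j - 1] + (ca != cb))
        return d[len(a), len(b)]

    words = ["hint", "lint", "mint", "mine", "wine"]
    patterns = np.random.default_rng(1).normal(size=(len(words), 50))  # stand-ins

    ortho = [edit_distance(a, b) for a, b in itertools.combinations(words, 2)]
    neural = pdist(patterns)            # pairwise pattern distances, same order
    rho, p = spearmanr(ortho, neural)   # second-order (representational) match
    print(f"pattern/orthography rank correlation: rho={rho:.2f} (p={p:.2f})")
    ```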

  3. Using complex auditory-visual samples to produce emergent relations in children with autism.

    PubMed

    Groskreutz, Nicole C; Karsina, Allen; Miguel, Caio F; Groskreutz, Mark P

    2010-03-01

    Six participants with autism learned conditional relations between complex auditory-visual sample stimuli (dictated words and pictures) and simple visual comparisons (printed words) using matching-to-sample training procedures. Pre- and posttests examined potential stimulus control by each element of the complex sample when presented individually and emergence of additional conditional relations and oral labeling. Tests revealed class-consistent performance for all participants following training.

  4. The Effects of Visual-Verbal Redundancy and Recaps on Television News Learning.

    ERIC Educational Resources Information Center

    Son, Jinok; Davie, William

    A study examined the effects of visual-verbal redundancy and recaps on learning from television news. Two factors were used: redundancy between the visual and audio channels, and the presence or absence of a recap. Manipulation of these factors created four conditions: (1) redundant pictures and words plus recap, (2) redundant pictures and words…

  5. Preparing Content-Rich Learning Environments with VPython and Excel, Controlled by Visual Basic for Applications

    ERIC Educational Resources Information Center

    Prayaga, Chandra

    2008-01-01

    A simple interface between VPython and Microsoft (MS) Office products such as Word and Excel, controlled by Visual Basic for Applications, is described. The interface allows the preparation of content-rich, interactive learning environments by taking advantage of the three-dimensional (3D) visualization capabilities of VPython and the GUI…

  6. Real-world visual statistics and infants' first-learned object names

    PubMed Central

    Clerkin, Elizabeth M.; Hart, Elizabeth; Rehg, James M.; Yu, Chen

    2017-01-01

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8 1/2 to 10 1/2 month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed such that a very small set of objects was pervasively present—a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue ‘New frontiers for statistical learning in the cognitive sciences’. PMID:27872373

  7. Semantic and visual memory codes in learning disabled readers.

    PubMed

    Swanson, H L

    1984-02-01

    Two experiments investigated whether learning disabled readers' impaired recall is due to multiple coding deficiencies. In Experiment 1, learning disabled and skilled readers viewed nonsense pictures without names or with either relevant or irrelevant names with respect to the distinctive characteristics of the picture. Both types of names improved recall of nondisabled readers, while learning disabled readers exhibited better recall for unnamed pictures. No significant difference in recall was found between name training (relevant, irrelevant) conditions within reading groups. In Experiment 2, both reading groups participated in recall training for complex visual forms labeled with unrelated words, hierarchically related words, or without labels. A subsequent reproduction transfer task showed a facilitation in performance in skilled readers due to labeling, with learning disabled readers exhibiting better reproduction for unnamed pictures. Measures of output organization (clustering) indicated that recall is related to the development of superordinate categories. The results suggest that learning disabled children's reading difficulties are due to an inability to activate a semantic representation that interconnects visual and verbal codes.

  8. Spelling Instruction in Spanish: A Comparison of Self-Correction, Visual Imagery and Copying

    ERIC Educational Resources Information Center

    Gaintza, Zuriñe; Goikoetxea, Edurne

    2016-01-01

    Two randomised control experiments examined spelling outcomes in a repeated measures design (pre-test, post-tests; 1-day, 1-month follow-up, 5-month follow-up), where students learned Spanish irregular words through (1) immediate feedback using self-correction, (2) visual imagery where children imagine and represent words using movement, and (3)…

  9. What can we learn from learning models about sensitivity to letter-order in visual word recognition?

    PubMed Central

    Lerner, Itamar; Armstrong, Blair C.; Frost, Ram

    2014-01-01

    Recent research on the effects of letter transposition in Indo-European languages has shown that readers are surprisingly tolerant of these manipulations in a range of tasks. This evidence has motivated the development of new computational models of reading that regard flexibility in positional coding as a core and universal principle of the reading process. Here we argue that such an approach neither captures nor explains cross-linguistic differences in transposed-letter effects. To address this issue, we investigated how a simple domain-general connectionist architecture performs in tasks such as letter transposition and letter substitution after it has learned to process words in the context of different linguistic environments. The results show that, in spite of the neurobiological noise involved in registering letter position in all languages, flexibility and inflexibility in coding letter order are also shaped by the statistical orthographic properties of words in a language, such as the relative prevalence of anagrams. Our learning model also generated novel predictions for targeted empirical research, demonstrating a clear advantage of learning models for studying visual word recognition. PMID:25431521
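
    As a rough illustration of the kind of test such learning models undergo, the sketch below trains a small generic classifier on slot-coded words and probes it with a transposed-letter and a substituted-letter input; the toy lexicon, slot coding, and network are assumptions, not the architecture used in the paper.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    ALPHA = "abcdefghijklmnopqrstuvwxyz"

    def slot_code(word):
        """One-hot code each letter in its serial position (slot coding)."""
        vec = np.zeros(len(word) * 26)
        for i, ch in enumerate(word):
            vec[i * 26 + ALPHA.index(ch)] = 1.0
        return vec

    lexicon = ["judge", "trial", "cigar", "blame", "forth"]
    X = np.array([slot_code(w) for w in lexicon])
    net = MLPClassifier(hidden_layer_sizes=(30,), max_iter=3000,
                        random_state=0).fit(X, np.arange(len(lexicon)))

    # Probe with a transposed-letter neighbour and a substituted-letter
    # control that overlaps "judge" in the same number of slots.
    for probe in ("jugde", "junpe"):
        p = net.predict_proba([slot_code(probe)])[0][lexicon.index("judge")]
        print(f"{probe} -> activation of 'judge': {p:.2f}")
    ```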

  10. Visual field differences in visual word recognition can emerge purely from perceptual learning: evidence from modeling Chinese character pronunciation.

    PubMed

    Hsiao, Janet Hui-Wen

    2011-11-01

    In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than the number of semantic radical types, in SP characters the information is skewed to the right, whereas in PS characters it is skewed to the left. By training a computational model of SP and PS character recognition that takes into account the locations in which the characters appear in the visual field during learning, but does not assume any fundamental hemispheric processing difference, we show that visual field differences can emerge as a consequence of the structural differences in information between SP and PS characters, as opposed to fundamental processing differences between the two hemispheres. This modeling result is also consistent with behavioral naming performance. This work provides strong evidence that perceptual learning, i.e., the information structure of word stimuli to which readers have long been exposed, is one of the factors that account for hemispheric asymmetry effects in visual word recognition.

  11. Deep generative learning of location-invariant visual word recognition.

    PubMed

    Di Bono, Maria Grazia; Zorzi, Marco

    2013-01-01

    It is widely believed that orthographic processing implies an approximate, flexible coding of letter position, as shown by relative-position and transposition priming effects in visual word recognition. These findings have inspired alternative proposals about the representation of letter position, ranging from noisy coding across the ordinal positions to relative position coding based on open bigrams. This debate can be cast within the broader problem of learning location-invariant representations of written words, that is, a coding scheme abstracting the identity and position of letters (and combinations of letters) from their eye-centered (i.e., retinal) locations. We asked whether location-invariance would emerge from deep unsupervised learning on letter strings and what type of intermediate coding would emerge in the resulting hierarchical generative model. We trained a deep network with three hidden layers on an artificial dataset of letter strings presented at five possible retinal locations. Though word-level information (i.e., word identity) was never provided to the network during training, linear decoding from the activity of the deepest hidden layer yielded near-perfect accuracy in location-invariant word recognition. Conversely, decoding from lower layers yielded a large number of transposition errors. Analyses of emergent internal representations showed that word selectivity and location invariance increased as a function of layer depth. Word-tuning and location-invariance were found at the level of single neurons, but there was no evidence for bigram coding. Finally, the distributed internal representation of words at the deepest layer showed higher similarity to the representation elicited by the two exterior letters than by other combinations of two contiguous letters, in agreement with the hypothesis that word edges have special status. These results reveal that the efficient coding of written words, which was the model's learning objective, is largely based on letter-level information.
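
    A minimal sketch of the location-invariance test described here, under simplifying assumptions: a single unsupervised RBM layer stands in for the paper's three-layer deep generative network, a toy lexicon is rendered at five retinal positions, and a linear probe decodes word identity from the hidden layer.

    ```python
    import numpy as np
    from sklearn.neural_network import BernoulliRBM
    from sklearn.linear_model import LogisticRegression

    ALPHA = "abcdefghijklmnopqrstuvwxyz"
    RETINA, WORD_LEN = 8, 4              # letter slots; toy 4-letter words

    def render(word, pos):
        """One-hot the word's letters starting at retinal slot `pos`."""
        img = np.zeros(RETINA * 26)
        for i, ch in enumerate(word):
            img[(pos + i) * 26 + ALPHA.index(ch)] = 1.0
        return img

    words = ["sand", "land", "lane", "line", "wine", "wind"]
    positions = range(RETINA - WORD_LEN + 1)          # five retinal locations
    X = np.array([render(w, p) for w in words for p in positions])
    y = np.repeat(np.arange(len(words)), len(positions))

    rbm = BernoulliRBM(n_components=60, learning_rate=0.05, n_iter=200,
                       random_state=0).fit(X)         # unsupervised: no labels
    H = rbm.transform(X)                              # hidden-layer activity

    # Linear read-out: is word identity decodable regardless of location?
    probe = LogisticRegression(max_iter=1000).fit(H, y)
    print("location-invariant decoding accuracy:", probe.score(H, y))
    ```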

  12. Feature and Region Selection for Visual Learning.

    PubMed

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer them, this paper presents a method for feature selection and region selection in the visual BoW model, which allows an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions and to jointly optimize these latent variables with the parameters of a classifier (e.g., a support vector machine). Our approach has four main benefits: (1) it accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; (2) it handles both regions in images and spatio-temporal regions in videos in a unified way; (3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and (4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results on the PASCAL VOC 2007, MSR Action Dataset II, and YouTube datasets illustrate the benefits of our approach.
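
    The sketch below conveys the core idea in toy form: each histogram bin gets a latent weight that is optimized jointly with a linear classifier, so discriminative visual words receive large weights. Plain logistic loss and joint gradient descent stand in for the paper's SVM objective and reduced-gradient solver; the synthetic data are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, d = 200, 30                       # images x visual-word histogram bins
    X = rng.poisson(2.0, size=(n, d)).astype(float)
    y = np.where(X[:, 0] - X[:, 1] + rng.normal(0, 0.5, n) > 0, 1.0, -1.0)

    w = np.ones(d)                       # latent per-bin (visual word) weights
    beta = np.zeros(d)                   # linear classifier weights
    lr = 0.01
    for _ in range(2000):
        z = (X * w) @ beta
        g = -y / (1.0 + np.exp(y * z))   # gradient of logistic loss w.r.t. z
        beta -= lr * ((X * w).T @ g / n + 1e-3 * beta)
        w -= lr * ((X * beta).T @ g / n + 1e-3 * (w - 1.0))

    # Only bins 0 and 1 carry the class signal, so they should rank highest.
    print("most discriminative visual words:", np.argsort(-np.abs(w * beta))[:3])
    ```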

  13. Somebody's Jumping on the Floor: Incorporating Music into Orientation and Mobility for Preschoolers with Visual Impairments

    ERIC Educational Resources Information Center

    Sapp, Wendy

    2011-01-01

    Young children with visual impairments face many challenges as they learn to orient to and move through their environment, the beginnings of orientation and mobility (O&M). Children who are visually impaired must learn many concepts (such as body parts and positional words) and skills (like body movement and interpreting sensory information) to…

  14. Ultrasound visual feedback treatment and practice variability for residual speech sound errors

    PubMed Central

    Preston, Jonathan L.; McCabe, Patricia; Rivera-Campos, Ahmed; Whittle, Jessica L.; Landry, Erik; Maas, Edwin

    2014-01-01

    Purpose: The goals were to (1) test the efficacy of a motor-learning-based treatment that includes ultrasound visual feedback for individuals with residual speech sound errors, and (2) explore whether the addition of prosodic cueing facilitates speech sound learning. Method: A multiple-baseline single-subject design was used, replicated across 8 participants. For each participant, one sound context was treated with ultrasound plus prosodic cueing for 7 sessions, and another sound context was treated with ultrasound but without prosodic cueing for 7 sessions. Sessions included ultrasound visual feedback as well as non-ultrasound treatment. Word-level probes assessing untreated words were used to evaluate retention and generalization. Results: For most participants, increases in accuracy of target sound contexts at the word level were observed with the treatment program regardless of whether prosodic cueing was included. Generalization between onset singletons and clusters was observed, as well as generalization to sentence-level accuracy. There was evidence of retention during post-treatment probes, including at a two-month follow-up. Conclusions: A motor-based treatment program that includes ultrasound visual feedback can facilitate learning of speech sounds in individuals with residual speech sound errors. PMID:25087938

  15. Newly learned word forms are abstract and integrated immediately after acquisition

    PubMed Central

    Kapnoula, Efthymia C.; McMurray, Bob

    2015-01-01

    A hotly debated question in word learning concerns the conditions under which newly learned words compete or interfere with familiar words during spoken word recognition. This has recently been described as a key marker of the integration of a new word into the lexicon and was thought to require consolidation (Dumay & Gaskell, Psychological Science, 18, 35–39, 2007; Gaskell & Dumay, Cognition, 89, 105–132, 2003). Recently, however, Kapnoula, Packard, Gupta, and McMurray (Cognition, 134, 85–99, 2015) showed that interference can be observed immediately after a word is first learned, implying very rapid integration of new words into the lexicon. It is an open question whether these kinds of effects derive from episodic traces of novel words or from more abstract and lexicalized representations. Here we addressed this question by testing inhibition for newly learned words using training and test stimuli presented in different talker voices. During training, participants were exposed to a set of nonwords spoken by a female speaker. Immediately after training, we assessed the ability of the novel word forms to inhibit familiar words, using a variant of the visual world paradigm. Crucially, the test items were produced by a male speaker. An analysis of fixations showed that even with a change in voice, newly learned words interfered with the recognition of similar known words. These findings show that lexical competition effects from newly learned words spread across different talker voices, which suggests that newly learned words can be sufficiently lexicalized, and abstract with respect to talker voice, without consolidation. PMID:26202702

  16. An eye movement corpus study of the age-of-acquisition effect.

    PubMed

    Dirix, Nicolas; Duyck, Wouter

    2017-12-01

    In the present study, we investigated the effects of word-level age of acquisition (AoA) on natural reading. Previous studies, using multiple language modalities, showed that earlier-learned words are recognized, read, spoken, and responded to faster than words learned later in life. Until now, in visual word recognition the experimental materials were limited to single-word or sentence studies. We analyzed the data of the Ghent Eye-tracking Corpus (GECO; Cop, Dirix, Drieghe, & Duyck, in press), an eyetracking corpus of participants reading an entire novel, resulting in the first eye movement megastudy of AoA effects in natural reading. We found that the ages at which specific words were learned indeed influenced reading times, above other important (correlated) lexical variables, such as word frequency and length. Shorter fixations for earlier-learned words were consistently found throughout the reading process, in both early (single-fixation durations, first-fixation durations, gaze durations) and late (total reading times) measures. Implications for theoretical accounts of AoA effects and eye movements are discussed.

  17. Learning semantic and visual similarity for endomicroscopy video retrieval.

    PubMed

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    Content-based image retrieval (CBIR) is a valuable computer vision technique which is increasingly being applied in the medical community for diagnosis support. However, traditional CBIR systems only deliver visual outputs, i.e., images having a similar appearance to the query, which is not directly interpretable by physicians. Our objective is to provide a system for endomicroscopy video retrieval which delivers both visual and semantic outputs that are consistent with each other. In a previous study, we developed an adapted bag-of-visual-words method for endomicroscopy retrieval, called "Dense-Sift," that computes a visual signature for each video. In this paper, we present a novel approach to complement visual similarity learning with semantic knowledge extraction in the field of in vivo endomicroscopy. We first leverage a semantic ground truth based on eight binary concepts, in order to transform these visual signatures into semantic signatures that reflect how much the presence of each semantic concept is expressed by the visual words describing the videos. Using cross-validation, we demonstrate that, in terms of semantic detection, our intuitive Fisher-based method transforming visual-word histograms into semantic estimations outperforms support vector machine (SVM) methods with statistical significance. In a second step, we propose to improve retrieval relevance by learning an adjusted similarity distance from a perceived similarity ground truth. The resulting distance learning method statistically improves the correlation with perceived similarity. We also demonstrate that, in terms of perceived similarity, the recall performance of the semantic signatures is close to that of the visual signatures and significantly better than those of several state-of-the-art CBIR methods. The semantic signatures are thus able to communicate high-level medical knowledge while remaining consistent with the low-level visual signatures, and they are much shorter. In our resulting retrieval system, we use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.

  18. Short-term memory for serial order supports vocabulary development: new evidence from a novel word learning paradigm.

    PubMed

    Majerus, Steve; Boukebza, Claire

    2013-12-01

    Although recent studies suggest a strong association between short-term memory (STM) for serial order and lexical development, the precise mechanisms linking the two domains remain to be determined. This study explored the nature of these mechanisms via a microanalysis of performance on serial order STM and novel word learning tasks. In the experiment, 6- and 7-year-old children were administered tasks maximizing STM for either item or serial order information as well as paired-associate learning tasks involving the learning of novel words, visual symbols, or familiar word pair associations. Learning abilities for novel words were specifically predicted by serial order STM abilities. A measure estimating the precision of serial order coding predicted the rate of correct repetitions and the rate of phoneme migration errors during the novel word learning process. In line with recent theoretical accounts, these results suggest that serial order STM supports vocabulary development via ordered and detailed reactivation of the novel phonological sequences that characterize new words.

  1. Novel word acquisition in aphasia: Facing the word-referent ambiguity of natural language learning contexts

    PubMed Central

    Peñaloza, Claudia; Mirman, Daniel; Tuomiranta, Leena; Benetello, Annalisa; Heikius, Ida-Maria; Järvinen, Sonja; Majos, Maria C.; Cardona, Pedro; Juncadella, Montserrat; Laine, Matti; Martin, Nadine; Rodríguez-Fornells, Antoni

    2017-01-01

    Recent research suggests that some people with aphasia preserve some ability to learn novel words and to retain them in the long-term. However, this novel word learning ability has been studied only in the context of single word-picture pairings. We examined the ability of people with chronic aphasia to learn novel words using a paradigm that presents new word forms together with a limited set of different possible visual referents and requires the identification of the correct word-object associations on the basis of online feedback. We also studied the relationship between word learning ability and aphasia severity, word processing abilities, and verbal short-term memory (STM). We further examined the influence of gross lesion location on new word learning. The word learning task was first validated with a group of forty-five young adults. Fourteen participants with chronic aphasia were administered the task and underwent tests of immediate and long-term recognition memory at 1 week. Their performance was compared to that of a group of fourteen matched controls using growth curve analysis. The learning curve and recognition performance of the aphasia group was significantly below the matched control group, although above-chance recognition performance and case-by-case analyses indicated that some participants with aphasia had learned the correct word-referent mappings. Verbal STM but not word processing abilities predicted word learning ability after controlling for aphasia severity. Importantly, participants with lesions in the left frontal cortex performed significantly worse than participants with lesions that spared the left frontal region both during word learning and on the recognition tests. Our findings indicate that some people with aphasia can preserve the ability to learn a small novel lexicon in an ambiguous word-referent context. This learning and recognition memory ability was associated with verbal STM capacity, aphasia severity and the integrity of the left inferior frontal region. PMID:27085892

  2. Bag-of-features based medical image retrieval via multiple assignment and visual words weighting.

    PubMed

    Wang, Jingyan; Li, Yongping; Zhang, Ying; Wang, Chao; Xie, Honglan; Chen, Guoling; Gao, Xin

    2011-11-01

    Bag-of-features based approaches have become prominent for image retrieval and image classification tasks in the past decade. Such methods represent an image as a collection of local features, such as image patches and key points with scale invariant feature transform (SIFT) descriptors. To improve the bag-of-features methods, we first model the assignments of local descriptors as contribution functions, and then propose a novel multiple assignment strategy. Assuming the local features can be reconstructed by their neighboring visual words in a vocabulary, reconstruction weights can be solved by quadratic programming. The weights are then used to build contribution functions, resulting in a novel assignment method, called quadratic programming (QP) assignment. We further propose a novel visual word weighting method. The discriminative power of each visual word is analyzed by the sub-similarity function in the bin that corresponds to the visual word. Each sub-similarity function is then treated as a weak classifier. A strong classifier is learned by boosting methods that combine those weak classifiers. The weighting factors of the visual words are learned accordingly. We evaluate the proposed methods on medical image retrieval tasks. The methods are tested on three well-known data sets, i.e., the ImageCLEFmed data set, the 304 CT Set, and the basal-cell carcinoma image set. Experimental results demonstrate that the proposed QP assignment outperforms the traditional nearest neighbor assignment, the multiple assignment, and the soft assignment, whereas the proposed boosting based weighting strategy outperforms the state-of-the-art weighting methods, such as the term frequency weights and the term frequency-inverse document frequency weights.
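
    A small sketch of the reconstruction idea behind QP assignment, under simplifying assumptions: non-negative least squares over a descriptor's nearest visual words (via scipy) stands in for the paper's quadratic program, and the normalized weights serve as soft assignments. The synthetic vocabulary and descriptor are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)
    vocab = rng.normal(size=(50, 16))         # 50 visual words, 16-D space
    desc = 0.6 * vocab[3] + 0.4 * vocab[7]    # descriptor between two words

    # The k nearest visual words are the candidate bins for this descriptor.
    k = 5
    nearest = np.argsort(np.linalg.norm(vocab - desc, axis=1))[:k]

    weights, _ = nnls(vocab[nearest].T, desc) # min ||B w - desc||, w >= 0
    weights /= weights.sum()                  # soft assignment over the k bins
    for idx, wgt in zip(nearest, weights):
        print(f"visual word {idx}: assignment weight {wgt:.2f}")
    ```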

  3. Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.

    PubMed

    André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2011-01-01

    Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available ground-truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
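
    In the same spirit, the sketch below learns non-negative per-visual-word weights from labelled similar/dissimilar pairs with a simple margin rule; the synthetic signatures, labels, and projected-gradient updates are illustrative stand-ins for the paper's cost function and solver.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    sig = rng.random((20, 12))                # 20 video signatures, 12 words
    pairs = [(i, j, abs(sig[i, 0] - sig[j, 0]) < 0.3)   # "perceived" labels
             for i in range(20) for j in range(i + 1, 20)]

    w = np.ones(12)                           # one weight per visual word
    lr, margin = 0.05, 0.5
    for _ in range(300):
        for i, j, similar in pairs:
            d2 = (sig[i] - sig[j]) ** 2       # per-word squared differences
            dist = w @ d2
            if similar and dist > margin:     # pull similar pairs inside
                w -= lr * d2
            elif not similar and dist < margin:  # push dissimilar pairs out
                w += lr * d2
        w = np.clip(w, 0.0, None)             # keep the metric non-negative

    # Labels were driven by word 0, so its weight should stand out.
    print("learned per-word weights:", np.round(w, 2))
    ```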

  4. Investigating Orthographic and Semantic Aspects of Word Learning in Poor Comprehenders

    ERIC Educational Resources Information Center

    Ricketts, Jessie; Bishop, Dorothy V. M.; Nation, Kate

    2008-01-01

    This study compared orthographic and semantic aspects of word learning in children who differed in reading comprehension skill. Poor comprehenders and controls matched for age (9-10 years), nonverbal ability and decoding skill were trained to pronounce 20 visually presented nonwords, 10 in a consistent way and 10 in an inconsistent way. They then…

  5. Visualization and Analysis of Geology Word Vectors for Efficient Information Extraction

    NASA Astrophysics Data System (ADS)

    Floyd, J. S.

    2016-12-01

    When a scientist begins studying a new geographic region of the Earth, they frequently begin by gathering relevant scientific literature in order to understand what is known, for example, about the region's geologic setting, structure, stratigraphy, and tectonic and environmental history. Experienced scientists typically know what keywords to seek and understand that if a document contains one important keyword, then other words in the document may be important as well. Word relationships in a document give rise to what is known in linguistics as the context-dependent nature of meaning. For example, the meaning of the word `strike' in geology, as in the strike of a fault, is quite different from its popular meaning in baseball. In addition, word order, such as in the phrase `Cretaceous-Tertiary boundary,' often corresponds to the order of sequences in time or space. The context of words and the relevance of words to each other can be derived quantitatively by machine learning vector representations of words. Here we show the results of training a neural network to create word vectors from scientific research papers from selected rift basins and mid-ocean ridges: the Woodlark Basin of Papua New Guinea, the Hess Deep rift, and the Gulf of Mexico basin. The word vectors are statistically defined by surrounding words within a given window, limited by the length of each sentence. The word vectors are analyzed by their cosine distance to related words (e.g., `axial' and `magma'), classified by high dimensional clustering, and visualized by reducing the vector dimensions and plotting the vectors on a two- or three-dimensional graph. Similarity analysis of `Triassic' and `Cretaceous' returns `Jurassic' as the nearest word vector, suggesting that the model is capable of learning the geologic time scale. Similarity analysis of `basalt' and `minerals' automatically returns mineral names such as `chlorite', `plagioclase,' and `olivine.' Word vector analysis and visualization allow one to extract information from hundreds of papers or more and find relationships in less time than it would take to read all of the papers. As machine learning tools become more commonly available, more and more scientists will be able to use and refine these tools for their individual needs.
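
    A minimal version of this workflow, assuming gensim's Word2Vec is available; the tiny hand-made corpus stands in for tokenized sentences from the gathered papers, and with a realistic corpus the same queries return the kinds of geological neighbours described above.

    ```python
    from gensim.models import Word2Vec

    corpus = [                                    # stand-in tokenized sentences
        ["axial", "magma", "chamber", "beneath", "the", "ridge"],
        ["plagioclase", "and", "olivine", "in", "basalt", "samples"],
        ["chlorite", "alteration", "of", "basalt", "minerals"],
        ["cretaceous", "strata", "overlie", "jurassic", "units"],
        ["triassic", "and", "jurassic", "rifting", "of", "the", "basin"],
    ]
    model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, seed=0)

    # Cosine similarity in the learned space; with a real corpus, queries
    # like these recover geologically related terms, as described above.
    print(model.wv.most_similar("basalt", topn=3))
    print(model.wv.similarity("triassic", "jurassic"))
    ```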

  6. Cross-Situational Learning with Bayesian Generative Models for Multimodal Category and Word Learning in Robots

    PubMed Central

    Taniguchi, Akira; Taniguchi, Tadahiro; Cangelosi, Angelo

    2017-01-01

    In this paper, we propose a Bayesian generative model that can form multiple categories for each sensory-channel and can associate words with any of four sensory-channels (action, position, object, and color). The paper focuses on cross-situational learning that uses the co-occurrence between words and sensory-channel information in situations more complex than those of conventional cross-situational learning. We conducted a learning scenario using a simulator and a real humanoid iCub robot. In the scenario, a human tutor provided a sentence describing an object of visual attention and an accompanying action to the robot. The scenario was set as follows: the number of words per sensory-channel was three or four, and the number of learning trials was 20 and 40 for the simulator and 25 and 40 for the real robot. The experimental results showed that the proposed method was able to estimate the multiple categorizations and to learn the relationships between the sensory-channels and words accurately. In addition, we conducted an action generation task and an action description task based on the word meanings learned in the cross-situational learning scenario. The results showed that the robot could successfully use the word meanings learned with the proposed method. PMID:29311888
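
    Only the cross-situational core is sketched below, in plain count form: words accrue evidence for the sensory values they repeatedly co-occur with across situations. This is a deliberately simplified stand-in, not the authors' Bayesian generative model, and the toy vocabulary is invented for illustration.

    ```python
    from collections import defaultdict

    # Each situation: the tutor's words plus the attended value per channel.
    situations = [
        (["grasp", "red", "ball"], {"action": "grasp", "color": "red", "object": "ball"}),
        (["push", "red", "cup"],   {"action": "push", "color": "red", "object": "cup"}),
        (["grasp", "blue", "cup"], {"action": "grasp", "color": "blue", "object": "cup"}),
        (["push", "blue", "ball"], {"action": "push", "color": "blue", "object": "ball"}),
    ]

    counts = defaultdict(lambda: defaultdict(float))
    for words, channels in situations:
        for w in words:                      # credit is ambiguous within one
            for value in channels.values():  # situation, spread over channels
                counts[w][value] += 1.0

    # Across situations the ambiguity washes out: "red" co-occurs with the
    # value red every time, but with ball or cup only half the time.
    for w in ("red", "grasp", "ball"):
        print(w, "->", max(counts[w], key=counts[w].get))
    ```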

  7. Methods & Strategies: Put Your Walls to Work

    ERIC Educational Resources Information Center

    Jackson, Julie; Durham, Annie

    2016-01-01

    This column provides ideas and techniques to enhance your science teaching. This month's issue discusses planning and using interactive word walls to support science and reading instruction. Many classrooms have word walls displaying vocabulary that students have learned in class. Word walls serve as visual scaffolds to support instruction. To…

  8. Is Banara Really a Word?

    ERIC Educational Resources Information Center

    Qiao, Xiaomei; Forster, Kenneth; Witzel, Naoko

    2009-01-01

    Bowers, Davis, and Hanley ("Interfering neighbours: The impact of novel word learning on the identification of visually similar words," Cognition, 97(3), B45-B54, 2005) reported that if participants were trained to type nonwords such as "banara", subsequent semantic categorization responses to…
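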

  9. Very Similar Spacing-Effect Patterns in Very Different Learning/Practice Domains

    PubMed Central

    Kornmeier, Jürgen; Spitzer, Manfred; Sosic-Vasic, Zrinka

    2014-01-01

    Temporally distributed (“spaced”) learning can be twice as efficient as massed learning. This “spacing effect” occurs with a broad spectrum of learning materials, with humans of different ages, with non-human vertebrates, and also with invertebrates, indicating that very basic learning mechanisms are at work (“generality”). Although most studies so far focused on very narrow ranges of spacing intervals, there is some evidence for non-monotonic behavior of the spacing effect (“nonlinearity”), with optimal spacing intervals at different time scales. In the current study we addressed both the nonlinearity aspect, by using a broad range of spacing intervals, and the generality aspect, by using very different learning/practice domains: participants learned German-Japanese word pairs and performed visual acuity tests. For each of six groups we used a different spacing interval between learning/practice units, from 7 min to 24 h in logarithmic steps. Memory retention was studied in three consecutive final tests, one, seven, and 28 days after the final learning unit. For both vocabulary learning and visual acuity performance we found a highly significant effect of the spacing interval on final test performance. In the 12-h-spacing group, about 85% of the learned words stayed in memory and nearly all of the visual acuity gain was preserved. In the 24-h-spacing group, in contrast, only about 33% of the learned words were retained and the visual acuity gain dropped to zero. The very similar patterns of results from the two very different learning/practice domains point to similar underlying mechanisms. Further, our results indicate that spacing in the range of 12 hours is optimal. A second peak may lie around a spacing interval of 20 min, but here the data are less clear. We discuss relations between our results and basic learning at the neuronal level. PMID:24609081

  10. Pragmatically Framed Cross-Situational Noun Learning Using Computational Reinforcement Models

    PubMed Central

    Najnin, Shamima; Banerjee, Bonny

    2018-01-01

    Cross-situational learning and social pragmatic theories are prominent mechanisms for learning word meanings (i.e., word-object pairs). In this paper, the role of reinforcement in early word learning is investigated with an artificial agent. When exposed to a group of speakers, the agent comes to understand an initial set of vocabulary items belonging to the language used by the group. Both cross-situational learning and social pragmatic theory are taken into account. As social cues, joint attention and prosodic cues in the caregiver's speech are considered. During agent-caregiver interaction, the agent selects a word from the caregiver's utterance and learns the relations between that word and the objects in its visual environment. The “novel words to novel objects” language-specific constraint is assumed for computing rewards. The models are learned by maximizing the expected reward using reinforcement learning algorithms, both table-based (Q-learning, SARSA, SARSA-λ) and neural network-based (Q-learning for neural network (Q-NN), neural-fitted Q-network (NFQ), and deep Q-network (DQN)). Neural network-based reinforcement learning models are chosen over table-based models for better generalization and quicker convergence. Simulations are carried out using the mother-infant interaction CHILDES dataset for learning word-object pairings. Reinforcement is modeled in two cross-situational learning cases: (1) with joint attention (Attentional models), and (2) with joint attention and prosodic cues (Attentional-prosodic models). Attentional-prosodic models manifest superior performance to Attentional ones for the task of word learning. The Attentional-prosodic DQN outperforms existing word-learning models on the same task. PMID:29441027
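
    A tabular sketch of the reinforcement framing, reduced to a bandit-style Q-learning update: the agent picks a referent for a heard word and is rewarded when the choice matches the caregiver's intent. The toy vocabulary and reward rule are assumptions; the paper's models add attentional and prosodic cues and neural function approximation.

    ```python
    import random

    words = ["ball", "cup", "dog"]
    objects = ["BALL", "CUP", "DOG"]
    truth = dict(zip(words, objects))        # mapping to be discovered

    Q = {(w, o): 0.0 for w in words for o in objects}
    alpha, epsilon = 0.5, 0.2
    random.seed(0)

    for _ in range(300):                     # caregiver-agent interactions
        w = random.choice(words)             # word picked from the utterance
        if random.random() < epsilon:        # explore a random referent
            o = random.choice(objects)
        else:                                # exploit current estimates
            o = max(objects, key=lambda x: Q[(w, x)])
        r = 1.0 if truth[w] == o else 0.0    # reward from joint attention
        Q[(w, o)] += alpha * (r - Q[(w, o)]) # one-step (bandit-style) update

    for w in words:
        print(w, "->", max(objects, key=lambda x: Q[(w, x)]))
    ```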

  11. The role of the phonological loop in English word learning: a comparison of Chinese ESL learners and native speakers.

    PubMed

    Hamada, Megumi; Koda, Keiko

    2011-04-01

    Although the role of the phonological loop in word retention is well documented, research in Chinese character retention suggests the involvement of non-phonological encoding. This study investigated whether the extent to which the phonological loop contributes to learning and remembering visually introduced words varies between college-level Chinese ESL learners (N = 20) and native speakers of English (N = 20). The groups performed a paired associative learning task under two conditions (control versus articulatory suppression) with two word types (regularly spelled versus irregularly spelled words) differing in degree of phonological accessibility. The results demonstrated that both groups' recall declined when the phonological loop was made less available (with irregularly spelled words and in the articulatory suppression condition), but the decline was greater for the native group. These results suggest that word learning entails phonological encoding uniformly across learners, but the contribution of phonology varies among learners with diverse linguistic backgrounds.

  12. Picturing words? Sensorimotor cortex activation for printed words in child and adult readers

    PubMed Central

    Dekker, Tessa M.; Mareschal, Denis; Johnson, Mark H.; Sereno, Martin I.

    2014-01-01

    Learning to read involves associating abstract visual shapes with familiar meanings. Embodiment theories suggest that word meaning is at least partially represented in distributed sensorimotor networks in the brain (Barsalou, 2008; Pulvermueller, 2013). We explored how reading comprehension develops by tracking when and how printed words start activating these “semantic” sensorimotor representations as children learn to read. Adults and children aged 7–10 years showed clear category-specific cortical specialization for tool versus animal pictures during a one-back categorisation task. Thus, sensorimotor representations for these categories were in place at all ages. However, co-activation of these same brain regions by the visual objects’ written names was only present in adults, even though all children could read and comprehend all presented words, showed adult-like task performance, and older children were proficient readers. It thus takes years of training and expert reading skill before spontaneous processing of printed words’ sensorimotor meanings develops in childhood. PMID:25463817

  13. How Many Words Is a Picture Worth? Integrating Visual Literacy in Language Learning with Photographs

    ERIC Educational Resources Information Center

    Baker, Lottie

    2015-01-01

    Cognitive research has shown that the human brain processes images quicker than it processes words, and images are more likely than text to remain in long-term memory. With the expansion of technology that allows people from all walks of life to create and share photographs with a few clicks, the world seems to value visual media more than ever…

  14. It's Not a Math Lesson--We're Learning to Draw! Teachers' Use of Visual Representations in Instructing Word Problem Solving in Sixth Grade of Elementary School

    ERIC Educational Resources Information Center

    Boonen, Anton J. H.; Reed, Helen C.; Schoonenboom, Judith; Jolles, Jelle

    2016-01-01

    Non-routine word problem solving is an essential feature of the mathematical development of elementary school students worldwide. Many students experience difficulties in solving these problems due to erroneous problem comprehension. These difficulties could be alleviated by instructing students how to use visual representations that clarify the…

  15. Audiovisual speech facilitates voice learning.

    PubMed

    Sheffert, Sonya M; Olson, Elizabeth

    2004-02-01

    In this research, we investigated the effects of voice and face information on the perceptual learning of talkers and on long-term memory for spoken words. In the first phase, listeners were trained over several days to identify voices from words presented auditorily or audiovisually. The training data showed that visual information about speakers enhanced voice learning, revealing cross-modal connections in talker processing akin to those observed in speech processing. In the second phase, the listeners completed an auditory or audiovisual word recognition memory test in which equal numbers of words were spoken by familiar and unfamiliar talkers. The data showed that words presented by familiar talkers were more likely to be retrieved from episodic memory, regardless of modality. Together, these findings provide new information about the representational code underlying familiar talker recognition and the role of stimulus familiarity in episodic word recognition.

  16. Real-world visual statistics and infants' first-learned object names.

    PubMed

    Clerkin, Elizabeth M; Hart, Elizabeth; Rehg, James M; Yu, Chen; Smith, Linda B

    2017-01-05

    We offer a new solution to the unsolved problem of how infants break into word learning based on the visual statistics of everyday infant-perspective scenes. Images from head camera video captured by 8.5- to 10.5-month-old infants at 147 at-home mealtime events were analysed for the objects in view. The images were found to be highly cluttered with many different objects in view. However, the frequency distribution of object categories was extremely right skewed, such that a very small set of objects was pervasively present, a fact that may substantially reduce the problem of referential ambiguity. The statistical structure of objects in these infant egocentric scenes differs markedly from that in the training sets used in computational models and in experiments on statistical word-referent learning. Therefore, the results also indicate a need to re-examine current explanations of how infants break into word learning. This article is part of the themed issue 'New frontiers for statistical learning in the cognitive sciences'. © 2016 The Author(s).
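
    A sketch of the kind of right-skew analysis described: count object-category appearances across scenes and ask how much of the total the few most frequent categories cover. The scene data below are invented; only the form of the measurement reflects the abstract.

      from collections import Counter

      # Count object category appearances across scenes and measure how
      # much of the total the few most frequent categories cover.
      # Scene data are invented for demonstration.
      scenes = [
          ["spoon", "bowl", "chair", "cup"],
          ["spoon", "bowl", "table"],
          ["cup", "spoon", "plate", "bowl"],
          ["dog", "spoon", "bowl"],
      ]
      counts = Counter(obj for scene in scenes for obj in scene)
      total = sum(counts.values())
      top = counts.most_common(3)
      coverage = sum(n for _, n in top) / total
      print(top, f"-> top 3 categories cover {coverage:.0%} of appearances")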

  17. Help me if I can't: Social interaction effects in adult contextual word learning.

    PubMed

    Verga, Laura; Kotz, Sonja A

    2017-11-01

    A major challenge in second language acquisition is to build up new vocabulary. How is it possible to identify the meaning of a new word among several possible referents? Adult learners typically use contextual information, which reduces the number of possible referents a new word can have. Alternatively, a social partner may facilitate word learning by directing the learner's attention toward the correct new word meaning. While much is known about the role of this form of 'joint attention' in first language acquisition, little is known about its efficacy in second language acquisition. Consequently, we introduce and validate a novel visual word learning game to evaluate how joint attention affects the contextual learning of new words in a second language. Adult learners either acquired new words in a constant or variable sentence context by playing the game with a knowledgeable partner, or by playing the game alone on a computer. Results clearly show that participants who learned new words in social interaction (i) are faster in identifying a correct new word referent in variable sentence contexts, and (ii) temporally coordinate their behavior with a social partner. Testing the learned words in a post-learning recall or recognition task showed that participants who learned interactively better recognized words originally learned in a variable context. While this result may suggest that interactive learning facilitates the allocation of attention to a target referent, the differences in performance during recognition and recall call for further studies investigating the effect of social interaction on learning performance. In summary, we provide the first evidence of the role of joint attention in second language learning. Furthermore, the new interactive learning game offers itself to further testing in complex neuroimaging research, where the lack of appropriate experimental set-ups has so far limited the investigation of the neural basis of adult word learning in social interaction. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Computer-Based Learning of Spelling Skills in Children with and without Dyslexia

    ERIC Educational Resources Information Center

    Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jancke, Lutz; Meyer, Martin

    2011-01-01

    Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based…

  19. Cross-Language Priming of Word Meaning during Second Language Sentence Comprehension

    ERIC Educational Resources Information Center

    Yuan, Yanli; Woltz, Dan; Zheng, Robert

    2010-01-01

    The experiment investigated the benefit to second language (L2) sentence comprehension of priming word meanings with brief visual exposure to first language (L1) translation equivalents. Native English speakers learning Mandarin evaluated the validity of aurally presented Mandarin sentences. For selected words in half of the sentences there was…

  20. Building Reflection with Word Clouds for Online RN to BSN Students.

    PubMed

    Volkert, Delene R

    Reflection allows students to integrate learning with their personal context, developing deeper knowledge and promoting critical thinking. Word clouds help students develop themes/concepts beyond traditional methods, introducing visual aspects to an online learning environment. Students created word clouds and captions, then responded to those created by peers for a weekly discussion assignment. Students indicated overwhelming support for the use of word clouds to develop deeper understanding of the subject matter. This reflection assignment could be utilized in asynchronous, online undergraduate nursing courses for creative methods of building reflection and developing knowledge for the undergraduate RN to BSN student.

  1. Semantic and phonological coding in poor and normal readers.

    PubMed

    Vellutino, F R; Scanlon, D M; Spearing, D

    1995-02-01

    Three studies were conducted evaluating semantic and phonological coding deficits as alternative explanations of reading disability. In the first study, poor and normal readers in second and sixth grade were compared on various tests evaluating semantic development as well as on tests evaluating rapid naming and pseudoword decoding as independent measures of phonological coding ability. In a second study, the same subjects were given verbal memory and visual-verbal learning tasks using high and low meaning words as verbal stimuli and Chinese ideographs as visual stimuli. On the semantic tasks, poor readers performed below the level of the normal readers only at the sixth grade level, but, on the rapid naming and pseudoword learning tasks, they performed below the normal readers at the second as well as at the sixth grade level. On both the verbal memory and visual-verbal learning tasks, performance in poor readers approximated that of normal readers when the word stimuli were high in meaning but not when they were low in meaning. These patterns were essentially replicated in a third study that used some of the same semantic and phonological measures used in the first experiment, and verbal memory and visual-verbal learning tasks that employed word lists and visual stimuli (novel alphabetic characters) that more closely approximated those used in learning to read. It was concluded that semantic coding deficits are an unlikely cause of reading difficulties in most poor readers at the beginning stages of reading skills acquisition, but accrue as a consequence of prolonged reading difficulties in older readers. It was also concluded that phonological coding deficits are a probable cause of reading difficulties in most poor readers.

  2. Multimodal Word Meaning Induction From Minimal Exposure to Natural Text.

    PubMed

    Lazaridou, Angeliki; Marelli, Marco; Baroni, Marco

    2017-04-01

    By the time they reach early adulthood, English speakers are familiar with the meaning of thousands of words. In recent decades, computational simulations known as distributional semantic models (DSMs) have demonstrated that it is possible to induce word meaning representations solely from word co-occurrence statistics extracted from a large amount of text. However, while these models learn in batch mode from large corpora, human word learning proceeds incrementally after minimal exposure to new words. In this study, we run a set of experiments investigating whether minimal distributional evidence from very short passages suffices to trigger successful word learning in subjects, testing their linguistic and visual intuitions about the concepts associated with new words. After confirming that subjects are indeed very efficient distributional learners even from small amounts of evidence, we test a DSM on the same multimodal task, finding that it behaves in a remarkably human-like way. We conclude that DSMs provide a convincing computational account of word learning even at the early stages in which a word is first encountered, and the way they build meaning representations can offer new insights into human language acquisition. Copyright © 2017 Cognitive Science Society, Inc.
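
    To make the co-occurrence mechanism concrete, here is a toy distributional semantic model in Python: each word is represented by the counts of its neighbours within a small window, and words are compared by cosine similarity. The corpus, window size, and raw-count weighting are illustrative simplifications; real DSMs use large corpora and schemes such as PPMI or prediction-based embeddings.

      import math
      from collections import Counter, defaultdict

      # Toy DSM: represent each word by the counts of its neighbours within
      # a +/-2-word window; compare words by cosine similarity. The corpus
      # is invented and far too small to be meaningful; it only shows the
      # mechanics.
      corpus = ("the cat chased the mouse . the dog chased the cat . "
                "the mouse ate the cheese").split()

      window = 2
      vectors = defaultdict(Counter)
      for i, word in enumerate(corpus):
          lo, hi = max(0, i - window), min(len(corpus), i + window + 1)
          for j in range(lo, hi):
              if j != i:
                  vectors[word][corpus[j]] += 1

      def cosine(u, v):
          dot = sum(u[k] * v[k] for k in u)
          norm = lambda w: math.sqrt(sum(n * n for n in w.values())) or 1.0
          return dot / (norm(u) * norm(v))

      print("cat~dog:   ", round(cosine(vectors["cat"], vectors["dog"]), 2))
      print("cat~cheese:", round(cosine(vectors["cat"], vectors["cheese"]), 2))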

  3. Propose but verify: Fast mapping meets cross-situational word learning

    PubMed Central

    Trueswell, John C.; Medina, Tamara Nicol; Hafri, Alon; Gleitman, Lila R.

    2012-01-01

    We report three eyetracking experiments that examine the learning procedure used by adults as they pair novel words and visually presented referents over a sequence of referentially ambiguous trials. Successful learning under such conditions has been argued to be the product of a learning procedure in which participants provisionally pair each novel word with several possible referents and use a statistical-associative learning mechanism to gradually converge on a single mapping across learning instances. We argue here that successful learning in this setting is instead the product of a one-trial procedure in which a single hypothesized word-referent pairing is retained across learning instances, abandoned only if the subsequent instance fails to confirm the pairing – more a ‘fast mapping’ procedure than a gradual statistical one. We provide experimental evidence for this Propose-but-Verify learning procedure via three experiments in which adult participants attempted to learn the meanings of nonce words cross-situationally under varying degrees of referential uncertainty. The findings, using both explicit (referent selection) and implicit (eye movement) measures, show that even in these artificial learning contexts, which are far simpler than those encountered by a language learner in a natural environment, participants do not retain multiple meaning hypotheses across learning instances. As we discuss, these findings challenge ‘gradualist’ accounts of word learning and are consistent with the known rapid course of vocabulary learning in a first language. PMID:23142693
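
    The Propose-but-Verify procedure is simple enough to state in a few lines of code. In this sketch a single conjecture per word is kept, and it is replaced by a random guess from the current scene only when the scene disconfirms it; no association tallies are kept. The lexicon and scenes are invented for illustration.

      import random

      # Propose-but-Verify: retain one conjectured referent per word;
      # re-propose from the current scene only on disconfirmation.
      truth = {"blicket": "CUP", "dax": "DOG", "wug": "BALL"}
      objects = list(truth.values())

      hypothesis = {}  # word -> currently conjectured referent
      for trial in range(200):
          word = random.choice(list(truth))
          scene = random.sample(objects, k=2)
          if truth[word] not in scene:
              scene[0] = truth[word]               # referent is always present
          guess = hypothesis.get(word)
          if guess is None or guess not in scene:  # disconfirmed: re-propose
              hypothesis[word] = random.choice(scene)
          # A conjecture that is present in the scene is simply retained.

      print(hypothesis)  # converges because wrong conjectures keep failing

    Note that convergence here needs no explicit feedback: correct conjectures are never disconfirmed, while incorrect ones eventually encounter a scene that lacks the guessed object.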

  4. Near or far: The effect of spatial distance and vocabulary knowledge on word learning.

    PubMed

    Axelsson, Emma L; Perry, Lynn K; Scott, Emilly J; Horst, Jessica S

    2016-01-01

    The current study investigated the role of spatial distance in word learning. Two-year-old children saw three novel objects named while the objects were either in close proximity to each other or spatially separated. Children were then tested on their retention for the name-object associations. Keeping the objects spatially separated from each other during naming was associated with increased retention for children with larger vocabularies. Children with a lower vocabulary size demonstrated better retention if they saw objects in close proximity to each other during naming. This demonstrates that keeping a clear view of objects during naming improves word learning for children who have already learned many words, but keeping objects within close proximal range is better for children at earlier stages of vocabulary acquisition. The effect of distance is therefore not equal across varying vocabulary sizes. The influences of visual crowding, cognitive load, and vocabulary size on word learning are discussed. Copyright © 2015 The Authors. Published by Elsevier B.V. All rights reserved.

  5. The Relationship Between Selected Subtests of the Detroit Tests of Learning Aptitude and Second Grade Reading Achievement.

    ERIC Educational Resources Information Center

    Sherwood, Charles; Chambless, Martha

    Relationships between reading achievement and perceptual skills as measured by selected subtests of the Detroit Tests of Learning Aptitude were investigated in a sample of 73 second graders. Verbal opposites, visual memory for designs, and visual attention span for letters were significantly correlated with both word meaning and vocabulary…

  6. Learning to See Words

    PubMed Central

    2011-01-01

    Skilled reading requires recognizing written words rapidly; functional neuroimaging research has clarified how the written word initiates a series of responses in visual cortex. These responses are communicated to circuits in ventral occipitotemporal (VOT) cortex that learn to identify words rapidly. Structural neuroimaging has further clarified aspects of the white matter pathways that communicate reading signals between VOT and language systems. We review this circuitry, its development, and its deficiencies in poor readers. This review emphasizes data that measure the cortical responses and white matter pathways in individual subjects rather than group differences. Such methods have the potential to clarify why a child has difficulty learning to read and to offer guidance about the interventions that may be useful for that child. PMID:21801018

  7. The Effects of Techniques of Vocabulary Portfolio on L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Zarei, Abbas Ali; Baftani, Fahimeh Nasiri

    2014-01-01

    To investigate the effects of different techniques of vocabulary portfolio including word map, word wizard, concept wheel, visual thesaurus, and word rose on L2 vocabulary comprehension and production, a sample of 75 female EFL learners of Kish Day Language Institute in Karaj, Iran were selected. They were in five groups and each group received…

  8. Hidden word learning capacity through orthography in aphasia.

    PubMed

    Tuomiranta, Leena M; Càmara, Estela; Froudist Walsh, Seán; Ripollés, Pablo; Saunavaara, Jani P; Parkkola, Riitta; Martin, Nadine; Rodríguez-Fornells, Antoni; Laine, Matti

    2014-01-01

    The ability to learn to use new words is thought to depend on the integrity of the left dorsal temporo-frontal speech processing pathway. We tested this assumption in a chronic aphasic individual (AA) with an extensive left temporal lesion using a new-word learning paradigm. She exhibited severe phonological problems and Magnetic Resonance Imaging (MRI) suggested a complete disconnection of this left-sided white-matter pathway comprising the arcuate fasciculus (AF). Diffusion imaging tractography confirmed the disconnection of the direct segment and the posterior indirect segment of her left AF, essential components of the left dorsal speech processing pathway. Despite her left-hemispheric damage and moderate aphasia, AA learned to name and maintain the novel words in her active vocabulary on par with healthy controls up to 6 months after learning. This exceeds previous demonstrations of word learning ability in aphasia. Interestingly, AA's preserved word learning ability was modality-specific as it was observed exclusively for written words. Functional magnetic resonance imaging (fMRI) revealed that in contrast to normals, AA showed a significantly right-lateralized activation pattern in the temporal and parietal regions when engaged in reading. Moreover, learning of visually presented novel word-picture pairs also activated the right temporal lobe in AA. Both AA and the controls showed increased activation during learning of novel versus familiar word-picture pairs in the hippocampus, an area critical for associative learning. AA's structural and functional imaging results suggest that in a literate person, a right-hemispheric network can provide an effective alternative route for learning of novel active vocabulary. Importantly, AA's previously undetected word learning ability translated directly into therapy, as she could use written input also to successfully re-learn and maintain familiar words that she had lost due to her left hemisphere lesion. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Skill dependent audiovisual integration in the fusiform induces repetition suppression.

    PubMed

    McNorgan, Chris; Booth, James R

    2015-02-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  10. Skill Dependent Audiovisual Integration in the Fusiform Induces Repetition Suppression

    PubMed Central

    McNorgan, Chris; Booth, James R.

    2015-01-01

    Learning to read entails mapping existing phonological representations to novel orthographic representations and is thus an ideal context for investigating experience driven audiovisual integration. Because two dominant brain-based theories of reading development hinge on the sensitivity of the visual-object processing stream to phonological information, we were interested in how reading skill relates to audiovisual integration in this area. Thirty-two children between 8 and 13 years of age spanning a range of reading skill participated in a functional magnetic resonance imaging experiment. Participants completed a rhyme judgment task to word pairs presented unimodally (auditory- or visual-only) and cross-modally (auditory followed by visual). Skill-dependent sub-additive audiovisual modulation was found in left fusiform gyrus, extending into the putative visual word form area, and was correlated with behavioral orthographic priming. These results suggest learning to read promotes facilitatory audiovisual integration in the ventral visual-object processing stream and may optimize this region for orthographic processing. PMID:25585276

  11. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  12. Contextual diversity is a main determinant of word identification times in young readers.

    PubMed

    Perea, Manuel; Soares, Ana Paula; Comesaña, Montserrat

    2013-09-01

    Recent research with college-aged skilled readers by Adelman and colleagues revealed that contextual diversity (i.e., the number of contexts in which a word appears) is a more critical determinant of visual word recognition than mere repeated exposure (i.e., word frequency) (Psychological Science, 2006, Vol. 17, pp. 814-823). Given that contextual diversity has been claimed to be a relevant factor to word acquisition in developing readers, the effects of contextual diversity should also be a main determinant of word identification times in developing readers. A lexical decision experiment was conducted to examine the effects of contextual diversity and word frequency in young readers (children in fourth grade). Results revealed a sizable effect of contextual diversity, but not of word frequency, thereby generalizing Adelman and colleagues' data to a child population. These findings call for the implementation of dynamic developmental models of visual word recognition that go beyond a learning rule by mere exposure. Copyright © 2012 Elsevier Inc. All rights reserved.
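
    For readers unfamiliar with the distinction, the two predictors contrasted here are both simple counts: word frequency totals occurrences, while contextual diversity counts distinct contexts (e.g., documents) containing the word. A minimal sketch with an invented mini-corpus:

      from collections import Counter

      # Word frequency counts every occurrence; contextual diversity counts
      # distinct contexts (here, documents) containing the word.
      documents = [
          "the cat sat on the mat",
          "the cat chased a mouse",
          "a storm hit the coast",
      ]
      frequency, diversity = Counter(), Counter()
      for doc in documents:
          words = doc.split()
          frequency.update(words)
          diversity.update(set(words))  # each document counts at most once

      print("freq('the') =", frequency["the"], "| CD('the') =", diversity["the"])
      print("freq('cat') =", frequency["cat"], "| CD('cat') =", diversity["cat"])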

  13. Differential Effects of Music and Video Gaming During Breaks on Auditory and Visual Learning.

    PubMed

    Liu, Shuyan; Kuschpel, Maxim S; Schad, Daniel J; Heinz, Andreas; Rapp, Michael A

    2015-11-01

    The interruption of learning processes by breaks filled with diverse activities is common in everyday life. This study investigated the effects of active computer gaming and passive relaxation (rest and music) breaks on auditory versus visual memory performance. Young adults were exposed to breaks involving (a) resting with eyes open, (b) listening to music, and (c) playing a video game, immediately after memorizing auditory versus visual stimuli. To assess learning performance, words were recalled directly after the break (a delay of 8 min 30 s) and were recalled and recognized again after 7 days. Based on linear mixed-effects modeling, it was found that playing the Angry Birds video game during a short learning break impaired long-term retrieval in auditory learning but enhanced long-term retrieval in visual learning compared with the music and rest conditions. These differential effects of video games on visual versus auditory learning suggest specific interference of common break activities on learning.
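
    A hypothetical sketch in the spirit of the reported analysis, using the statsmodels formula interface: recall modelled with break activity and learning modality as fixed effects and a random intercept per participant. The column names and data are invented, and the study's actual model specification may differ.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      # Invented data: 4 participants x 3 break types x 2 modalities.
      rng = np.random.default_rng(0)
      rows = [{"participant": p, "break_type": b, "modality": m,
               "recall": rng.normal(0.7, 0.1)}
              for p in ["p1", "p2", "p3", "p4"]
              for b in ["rest", "music", "game"]
              for m in ["visual", "auditory"]]
      data = pd.DataFrame(rows)

      # Fixed effects: break_type, modality, and their interaction;
      # random intercept per participant.
      model = smf.mixedlm("recall ~ break_type * modality", data,
                          groups=data["participant"])
      print(model.fit().summary())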

  14. Hemispheric specialization for visual words is shaped by attention to sublexical units during initial learning.

    PubMed

    Yoncheva, Yuliya N; Wise, Jessica; McCandliss, Bruce

    2015-01-01

    Selective attention to grapheme-phoneme mappings during learning can impact the circuitry subsequently recruited during reading. Here we trained literate adults to read two novel scripts of glyph words containing embedded letters under different instructions. For one script, learners linked each embedded letter to its corresponding sound within the word (grapheme-phoneme focus); for the other, decoding was prevented so entire words had to be memorized. Post-training, ERPs were recorded during a reading task on the trained words within each condition and on untrained but decodable (transfer) words. Within the grapheme-phoneme condition, reaction-time patterns suggested both trained and transfer words were accessed via sublexical units, yet a late ERP response showed enhanced left lateralization for transfer words relative to trained words, potentially reflecting effortful decoding. Collectively, these findings show that selective attention to grapheme-phoneme mappings during learning drives the lateralization of circuitry that supports later word recognition. This study thus provides a model example of how different instructional approaches to the same material may impact changes in brain circuitry. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  15. Creating visual explanations improves learning.

    PubMed

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology, where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  16. Effects of Cumulative Frequency, but Not of Frequency Trajectory, in Lexical Decision Times of Older Adults and Patients with Alzheimer's Disease

    ERIC Educational Resources Information Center

    Caza, Nicole; Moscovitch, Morris

    2005-01-01

    The purpose of this study was to investigate the issue of age-limited learning effects on visual lexical decision in normal and pathological aging, by using words with different frequency trajectories and cumulative frequencies. We selected words that objectively changed in frequency trajectory from an early word count (Thorndike, 1921, 1932;…

  17. The effect of normal aging and age-related macular degeneration on perceptual learning.

    PubMed

    Astle, Andrew T; Blighe, Alan J; Webb, Ben S; McGraw, Paul V

    2015-01-01

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at an equivalent rate as age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach.

  18. The effect of normal aging and age-related macular degeneration on perceptual learning

    PubMed Central

    Astle, Andrew T.; Blighe, Alan J.; Webb, Ben S.; McGraw, Paul V.

    2015-01-01

    We investigated whether perceptual learning could be used to improve peripheral word identification speed. The relationship between the magnitude of learning and age was established in normal participants to determine whether perceptual learning effects are age invariant. We then investigated whether training could lead to improvements in patients with age-related macular degeneration (AMD). Twenty-eight participants with normal vision and five participants with AMD trained on a word identification task. They were required to identify three-letter words, presented 10° from fixation. To standardize crowding across each of the letters that made up the word, words were flanked laterally by randomly chosen letters. Word identification performance was measured psychophysically using a staircase procedure. Significant improvements in peripheral word identification speed were demonstrated following training (71% ± 18%). Initial task performance was correlated with age, with older participants having poorer performance. However, older adults learned more rapidly such that, following training, they reached the same level of performance as their younger counterparts. As a function of number of trials completed, patients with AMD learned at an equivalent rate as age-matched participants with normal vision. Improvements in word identification speed were maintained at least 6 months after training. We have demonstrated that temporal aspects of word recognition can be improved in peripheral vision with training across a range of ages and these learned improvements are relatively enduring. However, training targeted at other bottlenecks to peripheral reading ability, such as visual crowding, may need to be incorporated to optimize this approach. PMID:26605694
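
    The staircase procedure mentioned in both copies of this record can be illustrated with a standard 3-down-1-up rule, which converges near the 79%-correct point of the psychometric function. The simulated observer, step size, and stopping rule below are assumptions for demonstration, not the authors' exact protocol.

      import random

      # 3-down-1-up staircase on word presentation duration, run against a
      # simulated observer. All parameters are illustrative.
      def simulated_correct(duration_ms, threshold_ms=120.0):
          p = min(0.95, max(0.05, 0.5 + (duration_ms - threshold_ms) / 200.0))
          return random.random() < p

      duration, step = 400.0, 20.0
      correct_run, reversals, last_direction = 0, [], None
      while len(reversals) < 8:
          if simulated_correct(duration):
              correct_run += 1
              if correct_run == 3:              # three correct: make it harder
                  correct_run = 0
                  if last_direction == "up":
                      reversals.append(duration)
                  duration = max(10.0, duration - step)
                  last_direction = "down"
          else:                                 # one error: make it easier
              correct_run = 0
              if last_direction == "down":
                  reversals.append(duration)
              duration += step
              last_direction = "up"

      # Average the durations at direction reversals as a threshold estimate.
      print(f"threshold estimate ~ {sum(reversals) / len(reversals):.0f} ms")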

  19. A Split-Attention Effect in Multimedia Learning: Evidence for Dual Processing Systems in Working Memory.

    ERIC Educational Resources Information Center

    Mayer, Richard E.; Moreno, Roxana

    1998-01-01

    Multimedia learners (n=146 college students) were able to integrate words and computer-presented pictures more easily when the words were presented aurally rather than visually. This split-attention effect is consistent with a dual-processing model of working memory. (SLD)

  20. Perceptual Learning Style Matching and L2 Vocabulary Acquisition

    ERIC Educational Resources Information Center

    Tight, Daniel G.

    2010-01-01

    This study explored learning and retention of concrete nouns in second language Spanish by first language English undergraduates (N = 128). Each completed a learning style (visual, auditory, tactile/kinesthetic, mixed) assessment, took a vocabulary pretest, and then studied 12 words each through three conditions (matching, mismatching, mixed…

  1. The Left Occipitotemporal Cortex Does Not Show Preferential Activity for Words

    PubMed Central

    Petersen, Steven E.; Schlaggar, Bradley L.

    2012-01-01

    Regions in left occipitotemporal (OT) cortex, including the putative visual word form area, are among the most commonly activated in imaging studies of single-word reading. It remains unclear whether this part of the brain is more precisely characterized as specialized for words and/or letters or contains more general-use visual regions having properties useful for processing word stimuli, among others. In Analysis 1, we found no evidence of greater activity in left OT regions for words or letter strings relative to other high-spatial-frequency, high-contrast stimuli, including line drawings and Amharic strings (which constitute the Ethiopian writing system). In Analysis 2, we further investigated processing characteristics of OT cortex potentially useful in reading. Analysis 2 showed that a specific part of OT cortex 1) is responsive to visual feature complexity, measured by the number of strokes forming groups of letters or Amharic strings and 2) processes learned combinations of characters, such as those in words and pseudowords, as groups but does not do so in consonant and Amharic strings. Together, these results indicate that while regions of left OT cortex are not specialized for words, at least part of OT cortex has properties particularly useful for processing words and letters. PMID:22235035

  2. Mechanisms of attention in reading parafoveal words: a cross-linguistic study in children.

    PubMed

    Siéroff, Eric; Dahmen, Riadh; Fagard, Jacqueline

    2012-05-01

    The right visual field superiority (RVFS) for words may be explained by the cerebral lateralization for language, the scanning habits in relation to script direction, and spatial attention. The present study explored the influence of spatial attention on the RVFS in relation to scanning habits in school-age children. French second- and fourth-graders identified briefly presented French parafoveal words. Tunisian second- and fourth-graders identified Arabic words, and Tunisian fourth-graders identified French words. The distribution of spatial attention was evaluated by using a distracter in the visual field opposite the word. The results of the correct identification score showed that reading direction had only a partial effect on the identification of parafoveal words and the distribution of attention, with a clear RVFS and a larger effect of the distracter in the left visual field in French children reading French words, and an absence of asymmetry when Tunisian children read Arabic words. Fourth-grade Tunisian children also showed an RVFS when reading French words without an asymmetric distribution of attention, suggesting that their native language may have partially influenced reading strategies in the newly learned language. However, the mode of letter processing, evaluated by a qualitative error score, was only influenced by reading direction, with more sequential processing in the visual field where reading "begins." The distribution of attention when reading parafoveal words is better explained by the interaction between left hemisphere activation and strategies related to reading direction. We discuss these results in light of an attentional theory that dissociates selection and preparation.

  3. To What Extent Does Children's Spelling Improve as a Result of Learning Words with the Look, Say, Cover, Write, Check, Fix Strategy Compared with Phonological Spelling Strategies?

    ERIC Educational Resources Information Center

    Dymock, Susan; Nicholson, Tom

    2017-01-01

    The ubiquitous weekly spelling test assumes that words are best learned by memorisation and testing but is this the best way? This study compared two well-known approaches to spelling instruction, the rule based and visual memory approaches. A group of 55 seven-year-olds in two Year 3 classrooms was taught spelling in small groups for three…

  4. Feedback Visualization in a Grammar-Based E-Learning System for German: A Preliminary User Evaluation with the COMPASS System

    ERIC Educational Resources Information Center

    Harbusch, Karin; Hausdörfer, Annette

    2016-01-01

    COMPASS is an e-learning system that can visualize grammar errors during sentence production in German as a first or second language. Via drag-and-drop dialogues, it allows users to freely select word forms from a lexicon and to combine them into phrases and sentences. The system's core component is a natural-language generator that, for every new…

  5. Could a Multimodal Dictionary Serve as a Learning Tool? An Examination of the Impact of Technologically Enhanced Visual Glosses on L2 Text Comprehension

    ERIC Educational Resources Information Center

    Sato, Takeshi

    2016-01-01

    This study examines the efficacy of a multimodal online bilingual dictionary based on cognitive linguistics in order to explore the advantages and limitations of explicit multimodal L2 vocabulary learning. Previous studies have examined the efficacy of the verbal and visual representation of words while reading L2 texts, concluding that it…

  6. Form–meaning links in the development of visual word recognition

    PubMed Central

    Nation, Kate

    2009-01-01

    Learning to read takes time and it requires explicit instruction. Three decades of research has taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development. PMID:19933139

  7. Reviewing or Retrieving: What Activity Best Promotes Long-Term Retention?

    ERIC Educational Resources Information Center

    Lindgren, Paul D.

    2012-01-01

    Research studies repeatedly emphasize the importance of vocabulary capabilities to a large variety of academic activities. This study compared a learning strategy that exclusively involved the visual review of vocabulary word-definition pairs to a strategy that, in addition, prompted participants to attempt free-recall retrieval of words to match…

  8. Visual Representations in Mathematics Teaching: An Experiment with Students

    ERIC Educational Resources Information Center

    Debrenti, Edith

    2015-01-01

    General problem-solving skills are of central importance in school mathematics achievement. Word problems play an important role not just in mathematical education, but in general education as well. Meaningful learning and understanding are basic aspects of all kinds of learning and it is even more important in the case of learning mathematics. In…

  9. The influence of contextual diversity on word learning.

    PubMed

    Johns, Brendan T; Dye, Melody; Jones, Michael N

    2016-08-01

    In a series of analyses over mega datasets, Jones, Johns, and Recchia (Canadian Journal of Experimental Psychology, 66(2), 115-124, 2012) and Johns et al. (Journal of the Acoustical Society of America, 132:2, EL74-EL80, 2012) found that a measure of contextual diversity that takes into account the semantic variability of a word's contexts provided a better fit to both visual and spoken word recognition data than traditional measures, such as word frequency or raw context counts. This measure was empirically validated with an artificial language experiment (Jones et al.). The present study extends the empirical results with a unique natural language learning paradigm, which allows for an examination of the semantic representations that are acquired as semantic diversity is varied. Subjects were incidentally exposed to novel words as they rated short selections from articles, books, and newspapers. When novel words were encountered across distinct discourse contexts, subjects were both faster and more accurate at recognizing them than when they were seen in redundant contexts. However, learning across redundant contexts promoted the development of more stable semantic representations. These findings are predicted by a distributional learning model trained on the same materials as our subjects.
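
    A simplified stand-in for the semantic-diversity measure discussed: represent each context containing a target word as a bag-of-words vector and average the pairwise cosine dissimilarity between contexts. The published measure differs in its details; the invented toy contexts below merely show that redundant contexts score lower than distinct ones.

      import math
      from collections import Counter
      from itertools import combinations

      # Mean pairwise cosine dissimilarity between the contexts in which a
      # target word occurs, as a rough semantic-diversity score.
      def cosine(u, v):
          dot = sum(u[k] * v[k] for k in u)
          norm = lambda w: math.sqrt(sum(n * n for n in w.values())) or 1.0
          return dot / (norm(u) * norm(v))

      def semantic_diversity(contexts):
          vecs = [Counter(c.split()) for c in contexts]
          pairs = list(combinations(vecs, 2))
          return sum(1.0 - cosine(u, v) for u, v in pairs) / len(pairs)

      distinct  = ["the stock market fell", "the horse jumped the fence",
                   "she painted the wall"]
      redundant = ["the stock market fell", "the stock market rose",
                   "the stock market crashed"]
      print(semantic_diversity(distinct))   # higher: varied contexts
      print(semantic_diversity(redundant))  # lower: redundant contexts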

  10. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed Central

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after controlling for the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task. PMID:24575058

  11. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after controlling for the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task.

  12. Tone of voice guides word learning in informative referential contexts.

    PubMed

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C

    2013-06-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker's tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., "daxen") spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives' meanings, and, even in the absence of informative ToV, generalize them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning.

  13. Cross-Modal Binding in Developmental Dyslexia

    ERIC Educational Resources Information Center

    Jones, Manon W.; Branigan, Holly P.; Parra, Mario A.; Logie, Robert H.

    2013-01-01

    The ability to learn visual-phonological associations is a unique predictor of word reading, and individuals with developmental dyslexia show impaired ability in learning these associations. In this study, we compared developmentally dyslexic and nondyslexic adults on their ability to form cross-modal associations (or "bindings") based…

  14. [A technological device for optimizing the time taken for blind people to learn Braille].

    PubMed

    Hernández, Cesar; Pedraza, Luis F; López, Danilo

    2011-10-01

    This project was aimed at designing and implementing an electronic prototype for reducing the initial time taken by visually handicapped people, especially children, to learn Braille. The project was mainly based on a prototype digital electronic device that identifies material written by a user in Braille and translates it through a voice synthesis system, producing spoken words so that it can be determined whether the person's Braille writing is correct. A global system for mobile communications (GSM) module was also incorporated into the device, allowing it to send text messages, an innovation in the field of assistive devices for visually handicapped people. The project's main result was an easily accessed and understandable prototype device that improved visually handicapped people's initial learning of Braille. The time taken for visually handicapped people to learn Braille was significantly reduced, while their interest and their concentration during learning increased.

  15. An Updated Account of the WISELAV Project: A Visual Construction of the English Verb System

    ERIC Educational Resources Information Center

    Pablos, Andrés Palacios

    2016-01-01

    This article presents the state of the art in WISELAV, an on-going research project based on the metaphor Languages Are (like) Visuals (LAV) and its mapping Words-In-Shapes Exchange (WISE). First, the cognitive premises that motivate the proposal are recalled: the power of images, students' increasingly visual cognitive learning style, and the…

  16. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions

    PubMed Central

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2017-01-01

    An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as “not,” “and,” and “or” simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human–robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as “true,” “false,” and “not” work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word “and,” which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word “or,” which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system. PMID:29311891

  17. Representation Learning of Logic Words by an RNN: From Word Sequences to Robot Actions.

    PubMed

    Yamada, Tatsuro; Murata, Shingo; Arie, Hiroaki; Ogata, Tetsuya

    2017-01-01

    An important characteristic of human language is compositionality. We can efficiently express a wide variety of real-world situations, events, and behaviors by compositionally constructing the meaning of a complex expression from a finite number of elements. Previous studies have analyzed how machine-learning models, particularly neural networks, can learn from experience to represent compositional relationships between language and robot actions with the aim of understanding the symbol grounding structure and achieving intelligent communicative agents. Such studies have mainly dealt with the words (nouns, adjectives, and verbs) that directly refer to real-world matters. In addition to these words, the current study deals with logic words, such as "not," "and," and "or" simultaneously. These words are not directly referring to the real world, but are logical operators that contribute to the construction of meaning in sentences. In human-robot communication, these words may be used often. The current study builds a recurrent neural network model with long short-term memory units and trains it to learn to translate sentences including logic words into robot actions. We investigate what kind of compositional representations, which mediate sentences and robot actions, emerge as the network's internal states via the learning process. Analysis after learning shows that referential words are merged with visual information and the robot's own current state, and the logical words are represented by the model in accordance with their functions as logical operators. Words such as "true," "false," and "not" work as non-linear transformations to encode orthogonal phrases into the same area in a memory cell state space. The word "and," which required a robot to lift up both its hands, worked as if it was a universal quantifier. The word "or," which required action generation that looked apparently random, was represented as an unstable space of the network's dynamical system.
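
    A minimal PyTorch sketch of the architecture class both copies of this record describe: an embedding layer feeds an LSTM whose final hidden state is decoded into a discrete robot-action label. The vocabulary, action set, and sizes are invented, and the network is shown untrained; the authors' model additionally conditions on visual input and the robot's own state.

      import torch
      import torch.nn as nn

      # Sentence-to-action sketch: embedding -> LSTM -> linear readout of
      # the final time step. All names and sizes are illustrative.
      vocab = {"<pad>": 0, "raise": 1, "left": 2, "right": 3, "hand": 4,
               "and": 5, "or": 6, "not": 7}
      actions = ["RAISE_LEFT", "RAISE_RIGHT", "RAISE_BOTH", "NONE"]

      class Seq2Action(nn.Module):
          def __init__(self, vocab_size, n_actions, dim=32):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, dim)
              self.lstm = nn.LSTM(dim, dim, batch_first=True)
              self.out = nn.Linear(dim, n_actions)

          def forward(self, tokens):            # tokens: (batch, seq_len)
              h, _ = self.lstm(self.embed(tokens))
              return self.out(h[:, -1])         # decode the final time step

      model = Seq2Action(len(vocab), len(actions))
      sentence = torch.tensor([[vocab["raise"], vocab["left"], vocab["and"],
                                vocab["right"], vocab["hand"]]])
      print(actions[model(sentence).argmax(dim=-1).item()])  # untrained output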

  18. N170 Visual Word Specialization on Implicit and Explicit Reading Tasks in Spanish Speaking Adult Neoliterates

    ERIC Educational Resources Information Center

    Sanchez, Laura V.

    2014-01-01

    Adult literacy training is known to be difficult in terms of teaching and maintenance (Abadzi, 2003), perhaps because adults who recently learned to read in their first language have not acquired reading automaticity. This study examines the fast word recognition process in neoliterate adults to evaluate whether they show evidence of perceptual…

  19. Observational Word Learning in Two Bonobos ("Pan paniscus"): Ostensive and Non-Ostensive Contexts.

    ERIC Educational Resources Information Center

    Lyn, Heidi; Savage-Rumbaugh, E. Sue

    2000-01-01

    Using a modified human paradigm, this article explores two language-competent bonobos' abilities to map new words to objects in realistic surroundings with few exposures to the referents. Also investigates the necessity of the apes maintaining visual contact with the item to map the novel name onto the novel object. (Author/VWL)

  20. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers

    ERIC Educational Resources Information Center

    Chen, Chi-hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-01-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories…

  1. Learning To Learn: 15 Vocabulary Acquisition Activities. Tips and Hints.

    ERIC Educational Resources Information Center

    Holden, William R.

    1999-01-01

    This article describes a variety of ways learners can help themselves remember new words, choosing the ones that best suit their learning styles. It is asserted that repeated exposure to new lexical items using a variety of means is the most consistent predictor of retention. The use of verbal, visual, tactile, textual, kinesthetic, and sonic…

  2. Visual paired-associate learning: in search of material-specific effects in adult patients who have undergone temporal lobectomy.

    PubMed

    Smith, Mary Lou; Bigel, Marla; Miller, Laurie A

    2011-02-01

    The mesial temporal lobes are important for learning arbitrary associations. It has previously been demonstrated that left mesial temporal structures are involved in learning word pairs, but it is not yet known whether comparable lesions in the right temporal lobe impair visually mediated associative learning. Patients who had undergone left (n=16) or right (n=18) temporal lobectomy for relief of intractable epilepsy and healthy controls (n=13) were administered two paired-associate learning tasks assessing their learning and memory of pairs of abstract designs or pairs of symbols in unique locations. Both patient groups had deficits in learning the designs, but only the right temporal group was impaired in recognition. For the symbol location task, differences were not found in learning, but again a recognition deficit was found for the right temporal group. The findings implicate the mesial temporal structures in relational learning. They support a material-specific effect for recognition but not for learning and recall of arbitrary visual and visual-spatial associative information. Copyright © 2010 Elsevier Inc. All rights reserved.

  3. Top-down processing of symbolic meanings modulates the visual word form area.

    PubMed

    Song, Yiying; Tian, Moqian; Liu, Jia

    2012-08-29

    Functional magnetic resonance imaging (fMRI) studies on humans have identified a region in the left middle fusiform gyrus consistently activated by written words. This region is called the visual word form area (VWFA). Recently, a hypothesis called the interactive account has been proposed: to effectively analyze the bottom-up visual properties of words, the VWFA receives predictive feedback from higher-order regions engaged in processing sounds, meanings, or actions associated with words. Further, this top-down influence on the VWFA is independent of stimulus format. To test this hypothesis, we used fMRI to examine whether a symbolic nonword object (e.g., the Eiffel Tower) intended to represent something other than itself (i.e., Paris) could activate the VWFA. We found that scenes associated with symbolic meanings elicited a higher VWFA response than those not associated with symbolic meanings, and such top-down modulation of the VWFA can be established through short-term associative learning, even across modalities. In addition, the magnitude of the symbolic effect observed in the VWFA was positively correlated across individuals with the subjective experience of the strength of the symbol-referent association. Therefore, the VWFA is likely a neural substrate for the interaction of the top-down processing of symbolic meanings with the analysis of bottom-up visual properties of sensory inputs, making the VWFA the location where the symbolic meaning of both words and nonword objects is represented.

  4. Visual cortex activation in late-onset, Braille naive blind individuals: an fMRI study during semantic and phonological tasks with heard words.

    PubMed

    Burton, Harold; McLaren, Donald G

    2006-01-09

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks of which reading Braille is just one example.

  5. Visual cortex activation in late-onset, Braille naive blind individuals: An fMRI study during semantic and phonological tasks with heard words

    PubMed Central

    Burton, Harold; McLaren, Donald G.

    2013-01-01

    Visual cortex activity in the blind has been shown in Braille literate people, which raises the question of whether Braille literacy influences cross-modal reorganization. We used fMRI to examine visual cortex activation during semantic and phonological tasks with auditory presentation of words in two late-onset blind individuals who lacked Braille literacy. Multiple visual cortical regions were activated in the Braille naive individuals. Positive BOLD responses were noted in lower tier visuotopic (e.g., V1, V2, VP, and V3) and several higher tier visual areas (e.g., V4v, V8, and BA 37). Activity was more extensive and cross-correlation magnitudes were greater during the semantic compared to the phonological task. These results with Braille naive individuals plausibly suggest that visual deprivation alone induces visual cortex reorganization. Cross-modal reorganization of lower tier visual areas may be recruited by developing skills in attending to selected non-visual inputs (e.g., Braille literacy, enhanced auditory skills). Such learning might strengthen remote connections with multisensory cortical areas. Of necessity, the Braille naive participants must attend to auditory stimulation for language. We hypothesize that learning to attend to non-visual inputs probably strengthens the remaining active synapses following visual deprivation, and thereby increases cross-modal activation of lower tier visual areas when performing highly demanding non-visual tasks of which reading Braille is just one example. PMID:16198053

  6. Evaluation of the Level of Students with Visual Impairments in Turkey in Terms of the Concepts of Mobility Prerequisites (Body Plane/Traffic)

    ERIC Educational Resources Information Center

    Altunay Arslantekin, Banu

    2017-01-01

    Purpose: Visually impaired people are weak in learning words and concepts by hearing them and in experiencing the world with their bodies. In addition to developing a standardized assessment tool in the Development of Orientation and Mobility Skill Assessment Tool (OMSAT/YOBDA) for Visually Impaired Students Project, supported…

  7. An Attentional Goldilocks Effect: An Optimal Amount of Social Interactivity Promotes Word Learning from Video

    PubMed Central

    Nussenbaum, Kate; Amso, Dima

    2015-01-01

    Television can be a powerful education tool; however, content-makers must understand the factors that engage attention and promote learning from screen media. Prior research suggests that social engagement is critical for learning and that interactivity may enhance the educational quality of children’s media. The present study examined the effects of increasing the social interactivity of television on children’s visual attention and word learning. Three- to 5-year-old (MAge = 4;5 years, SD = 9 months) children completed a task in which they viewed videos of an actress teaching them the Swahili label for an on-screen image. Each child viewed these video clips in four conditions that parametrically manipulated social engagement and interactivity. We then tested whether each child had successfully learned the Swahili labels. Though 5-year-old children were able to learn words in all conditions, we found that there was an optimal level of social engagement that best supported learning for all participants, defined by engaging the child but not distracting from word labeling. Our eye-tracking data indicated that children in this condition spent more time looking at the target image and less time looking at the actress’s face as compared to the most interactive condition. These findings suggest that social interactivity is critical to engaging attention and promoting learning from screen media up until a certain point, after which social stimuli may draw attention away from target images and impair children’s word learning. PMID:27030791

  8. An Attentional Goldilocks Effect: An Optimal Amount of Social Interactivity Promotes Word Learning from Video.

    PubMed

    Nussenbaum, Kate; Amso, Dima

    2016-01-01

    Television can be a powerful education tool; however, content-makers must understand the factors that engage attention and promote learning from screen media. Prior research suggests that social engagement is critical for learning and that interactivity may enhance the educational quality of children's media. The present study examined the effects of increasing the social interactivity of television on children's visual attention and word learning. Three- to 5-year-old (MAge = 4;5 years, SD = 9 months) children completed a task in which they viewed videos of an actress teaching them the Swahili label for an on-screen image. Each child viewed these video clips in four conditions that parametrically manipulated social engagement and interactivity. We then tested whether each child had successfully learned the Swahili labels. Though 5-year-old children were able to learn words in all conditions, we found that there was an optimal level of social engagement that best supported learning for all participants, defined by engaging the child but not distracting from word labeling. Our eye-tracking data indicated that children in this condition spent more time looking at the target image and less time looking at the actress's face as compared to the most interactive condition. These findings suggest that social interactivity is critical to engaging attention and promoting learning from screen media up until a certain point, after which social stimuli may draw attention away from target images and impair children's word learning.

  9. Symbolic Play Connects to Language through Visual Object Recognition

    ERIC Educational Resources Information Center

    Smith, Linda B.; Jones, Susan S.

    2011-01-01

    Object substitutions in play (e.g. using a box as a car) are strongly linked to language learning and their absence is a diagnostic marker of language delay. Classic accounts posit a symbolic function that underlies both words and object substitutions. Here we show that object substitutions depend on developmental changes in visual object…

  10. Prestimulus brain activity predicts primacy in list learning

    PubMed Central

    Galli, Giulia; Choy, Tsee Leng; Otten, Leun J.

    2012-01-01

    Brain activity immediately before an event can predict whether the event will later be remembered. This indicates that memory formation is influenced by anticipatory mechanisms engaged ahead of stimulus presentation. Here, we asked whether anticipatory processes affect the learning of short word lists, and whether such activity varies as a function of serial position. Participants memorized lists of intermixed visual and auditory words with either an elaborative or rote rehearsal strategy. At the end of each list, a distraction task was performed followed by free recall. Recall performance was better for words in initial list positions and following elaborative rehearsal. Electrical brain activity before auditory words predicted later recall in the elaborative rehearsal condition. Crucially, anticipatory activity only affected recall when words occurred in initial list positions. This indicates that anticipatory processes, possibly related to general semantic preparation, contribute to primacy effects. PMID:22888370

  11. It's a Mad, Mad Wordle: For a New Take on Text, Try This Fun Word Cloud Generator

    ERIC Educational Resources Information Center

    Foote, Carolyn

    2009-01-01

    Nation. New. Common. Generation. These are among the most frequently used words spoken by President Barack Obama in his January 2009 inauguration speech as seen in a fascinating visual display called a Wordle. Educators, too, can harness the power of Wordle to enhance learning. Imagine providing students with a whole new perspective on…
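
    A word cloud of the kind described here is driven by simple word-frequency counts, which then set each word's display size. A minimal Python illustration (the text below is a placeholder, not the actual speech):

    from collections import Counter

    text = "nation new common generation nation common nation"
    counts = Counter(text.lower().split())
    for word, n in counts.most_common(3):
        print(word, n)  # the highest counts are drawn in the largest type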

  12. Tone of voice guides word learning in informative referential contexts

    PubMed Central

    Reinisch, Eva; Jesse, Alexandra; Nygaard, Lynne C.

    2012-01-01

    Listeners infer which object in a visual scene a speaker refers to from the systematic variation of the speaker’s tone of voice (ToV). We examined whether ToV also guides word learning. During exposure, participants heard novel adjectives (e.g., “daxen”) spoken with a ToV representing hot, cold, strong, weak, big, or small while viewing picture pairs representing the meaning of the adjective and its antonym (e.g., elephant-ant for big-small). Eye fixations were recorded to monitor referent detection and learning. During test, participants heard the adjectives spoken with a neutral ToV, while selecting referents from familiar and unfamiliar picture pairs. Participants were able to learn the adjectives’ meanings, and, even in the absence of informative ToV, generalise them to new referents. A second experiment addressed whether ToV provides sufficient information to infer the adjectival meaning or needs to operate within a referential context providing information about the relevant semantic dimension. Participants who saw printed versions of the novel words during exposure performed at chance during test. ToV, in conjunction with the referential context, thus serves as a cue to word meaning. ToV establishes relations between labels and referents for listeners to exploit in word learning. PMID:23134484

  13. Tracking Multiple Statistics: Simultaneous Learning of Object Names and Categories in English and Mandarin Speakers.

    PubMed

    Chen, Chi-Hsin; Gershkoff-Stowe, Lisa; Wu, Chih-Yi; Cheung, Hintat; Yu, Chen

    2017-08-01

    Two experiments were conducted to examine adult learners' ability to extract multiple statistics in simultaneously presented visual and auditory input. Experiment 1 used a cross-situational learning paradigm to test whether English speakers were able to use co-occurrences to learn word-to-object mappings and concurrently form object categories based on the commonalities across training stimuli. Experiment 2 replicated the first experiment and further examined whether speakers of Mandarin, a language in which final syllables of object names are more predictive of category membership than English, were able to learn words and form object categories when trained with the same type of structures. The results indicate that both groups of learners successfully extracted multiple levels of co-occurrence and used them to learn words and object categories simultaneously. However, marked individual differences in performance were also found, suggesting possible interference and competition in processing the two concurrent streams of regularities. Copyright © 2016 Cognitive Science Society, Inc.
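
    The cross-situational mechanism tested in these experiments can be illustrated by tallying label-object co-occurrences across individually ambiguous trials; the labels and objects below are hypothetical stand-ins, not the study's stimuli.

    from collections import defaultdict

    trials = [  # each trial: several labels heard while several objects are in view
        (["bosa", "gasser"], ["dog", "cup"]),
        (["bosa", "manu"], ["dog", "shoe"]),
        (["gasser", "manu"], ["cup", "shoe"]),
    ]

    cooccur = defaultdict(int)
    for labels, objects in trials:
        for w in labels:
            for o in objects:
                cooccur[(w, o)] += 1

    # The most reliable referent is the object each label co-occurs with most often.
    words = {w for labels, _ in trials for w in labels}
    objects = {o for _, objs in trials for o in objs}
    mapping = {w: max(objects, key=lambda o: cooccur[(w, o)]) for w in words}
    print(mapping)  # {'bosa': 'dog', 'gasser': 'cup', 'manu': 'shoe'}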

  14. Age-related behavioural and neurofunctional patterns of second language word learning: different ways of being successful.

    PubMed

    Marcotte, Karine; Ansaldo, Ana Inés

    2014-08-01

    This study aimed at investigating the neural basis of word learning as a function of age and word type. Ten young and ten elderly French-speaking participants were trained by means of a computerized Spanish word program. Both age groups reached a similar naming accuracy, but the elderly required significantly more time. Despite equivalent performance, distinct neural networks characterized performance at ceiling. While the young cohort showed subcortical activations, the elderly recruited the left inferior frontal gyrus, the left lingual gyrus and the precuneus. The learning trajectory of the elderly and the neuroimaging findings, together with their performance on the Stroop task, suggest that the young adults relied on control processing areas whereas the elderly relied on episodic memory circuits, which may reflect resorting to better preserved cognitive resources. Finally, the recruitment of visual processing areas by the elderly may reflect the impact of the language training method used. Copyright © 2014 Elsevier Inc. All rights reserved.

  15. Connectivity precedes function in the development of the visual word form area.

    PubMed

    Saygin, Zeynep M; Osher, David E; Norton, Elizabeth S; Youssoufian, Deanna A; Beach, Sara D; Feather, Jenelle; Gaab, Nadine; Gabrieli, John D E; Kanwisher, Nancy

    2016-09-01

    What determines the cortical location at which a given functionally specific region will arise in development? We tested the hypothesis that functionally specific regions develop in their characteristic locations because of pre-existing differences in the extrinsic connectivity of that region to the rest of the brain. We exploited the visual word form area (VWFA) as a test case, scanning children with diffusion and functional imaging at age 5, before they learned to read, and at age 8, after they learned to read. We found the VWFA developed functionally in this interval and that its location in a particular child at age 8 could be predicted from that child's connectivity fingerprints (but not functional responses) at age 5. These results suggest that early connectivity instructs the functional development of the VWFA, possibly reflecting a general mechanism of cortical development.

  16. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers.

    PubMed

    Kim, Judy S; Kanjlia, Shipra; Merabet, Lotfi B; Bedny, Marina

    2017-11-22

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the "VWFA" is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. Copyright © 2017 the authors 0270-6474/17/3711495-10$15.00/0.

  17. Development of the Visual Word Form Area Requires Visual Experience: Evidence from Blind Braille Readers

    PubMed Central

    Kanjlia, Shipra; Merabet, Lotfi B.

    2017-01-01

    Learning to read causes the development of a letter- and word-selective region known as the visual word form area (VWFA) within the human ventral visual object stream. Why does a reading-selective region develop at this anatomical location? According to one hypothesis, the VWFA develops at the nexus of visual inputs from retinotopic cortices and linguistic input from the frontotemporal language network because reading involves extracting linguistic information from visual symbols. Surprisingly, the anatomical location of the VWFA is also active when blind individuals read Braille by touch, suggesting that vision is not required for the development of the VWFA. In this study, we tested the alternative prediction that VWFA development is in fact influenced by visual experience. We predicted that in the absence of vision, the “VWFA” is incorporated into the frontotemporal language network and participates in high-level language processing. Congenitally blind (n = 10, 9 female, 1 male) and sighted control (n = 15, 9 female, 6 male), male and female participants each took part in two functional magnetic resonance imaging experiments: (1) word reading (Braille for blind and print for sighted participants), and (2) listening to spoken sentences of different grammatical complexity (both groups). We find that in blind, but not sighted participants, the anatomical location of the VWFA responds both to written words and to the grammatical complexity of spoken sentences. This suggests that in blindness, this region takes on high-level linguistic functions, becoming less selective for reading. More generally, the current findings suggest that experience during development has a major effect on functional specialization in the human cortex. SIGNIFICANCE STATEMENT The visual word form area (VWFA) is a region in the human cortex that becomes specialized for the recognition of written letters and words. Why does this particular brain region become specialized for reading? We tested the hypothesis that the VWFA develops within the ventral visual stream because reading involves extracting linguistic information from visual symbols. Consistent with this hypothesis, we find that in congenitally blind Braille readers, but not sighted readers of print, the VWFA region is active during grammatical processing of spoken sentences. These results suggest that visual experience contributes to VWFA specialization, and that different neural implementations of reading are possible. PMID:29061700

  18. The effect of visual and verbal modes of presentation on children's retention of images and words

    NASA Astrophysics Data System (ADS)

    Vasu, Ellen Storey; Howe, Ann C.

    This study tested the hypothesis that the use of two modes of presenting information to children has an additive memory effect for the retention of both images and words. Subjects were 22 first-grade and 22 fourth-grade children randomly assigned to visual and visual-verbal treatment groups. The visual-verbal group heard a description while observing an object; the visual group observed the same object but did not hear a description. Children were tested individually immediately after presentation of stimuli and two weeks later. They were asked to represent the information recalled through a drawing and an oral verbal description. In general, results supported the hypothesis and indicated, in addition, that children represent more information in iconic (pictorial) form than in symbolic (verbal) form. Strategies for using these results to enhance science learning at the elementary school level are discussed.

  19. Preserving Tradition through Technology.

    ERIC Educational Resources Information Center

    Wakshul, Barbra

    2001-01-01

    Language is easiest to learn before age 5. The Cherokee Nation supported production of a toy that teaches young children basic Cherokee words. When figures that come with the toy are placed into it, a computer chip activates a voice speaking the name of the figure in Cherokee. Learning takes place on visual, auditory, and tactile levels. (TD)

  20. Creating Meaning through Multimodality: Multiliteracies Assessment and Photo Projects for Online Portfolios

    ERIC Educational Resources Information Center

    Schmerbeck, Nicola; Lucht, Felecia

    2017-01-01

    Actively engaged in online media, learners today are surrounded by texts overtly and covertly transmitted by visual images, sound effects, and voices as well as the written word. Language learning portfolios can engage students in the literacy-oriented learning processes of interpretation, collaboration, and problem solving as outlined by Kern…

  1. Reading faces: investigating the use of a novel face-based orthography in acquired alexia.

    PubMed

    Moore, Michelle W; Brendel, Paul C; Fiez, Julie A

    2014-02-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic "FaceFont" orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a "linguistic bridge" into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. Copyright © 2013 Elsevier Inc. All rights reserved.

  2. Reading faces: Investigating the use of a novel face-based orthography in acquired alexia

    PubMed Central

    Moore, Michelle W.; Brendel, Paul C.; Fiez, Julie A.

    2014-01-01

    Skilled visual word recognition is thought to rely upon a particular region within the left fusiform gyrus, the visual word form area (VWFA). We investigated whether an individual (AA1) with pure alexia resulting from acquired damage to the VWFA territory could learn an alphabetic “FaceFont” orthography, in which faces rather than typical letter-like units are used to represent phonemes. FaceFont was designed to distinguish between perceptual versus phonological influences on the VWFA. AA1 was unable to learn more than five face-phoneme mappings, performing well below that of controls. AA1 succeeded, however, in learning and using a proto-syllabary comprising 15 face-syllable mappings. These results suggest that the VWFA provides a “linguistic bridge” into left hemisphere speech and language regions, irrespective of the perceptual characteristics of a written language. They also suggest that some individuals may be able to acquire a non-alphabetic writing system more readily than an alphabetic writing system. PMID:24463310

  3. Visualizing Intercultural Literacy: Engaging Critically with Diversity and Migration in the Classroom through an Image-Based Approach

    ERIC Educational Resources Information Center

    Arizpe, Evelyn; Bagelman, Caroline; Devlin, Alison M.; Farrell, Maureen; McAdam, Julie E.

    2014-01-01

    Accessible forms of language, learning and literacy, as well as strategies that support intercultural communication are needed for the diverse population of refugee, asylum seeker and migrant children within schools. The research project "Journeys from Images to Words" explored the potential of visual texts to address these issues.…

  4. The Effect of Modality Shifts on Proactive Interference in Long-Term Memory.

    ERIC Educational Resources Information Center

    Dean, Raymond S.; And Others

    1983-01-01

    In experiment one, subjects learned a word list in blocked or random forms of auditory/visual change. In experiment two, subjects high or low in conceptual rigidity read passages under shift or nonshift conditions, exclusively in auditory or visual modes. A shift in modality provided a powerful release from proactive interference. (Author/CM)

  5. A Personal Vision Quest: Learning To Think Like an Artist.

    ERIC Educational Resources Information Center

    Dake, Dennis M.

    Using the metaphoric story device of two tribes, one that builds their culture around words and the other which depends primarily on visual perception, this paper suggests a distinctive mental paradigm at work within the society of artists, who pursue visual literacy through graphic ideation. The author discusses his education in art and his…

  6. The Effects of a Contextual Visual on Recall Measures of Listening Comprehension in Beginning College German.

    DTIC Science & Technology

    1979-05-01

    "Pictures did not make learning easier; they tended to distract the children from the printed word" (p. 623). Hammerly (1974) empirically tested the... effects observed in this study resulted from combining this particular visual with a particular passage, and it is difficult, therefore, to determine...

  7. Bag of Visual Words Model with Deep Spatial Features for Geographical Scene Classification

    PubMed Central

    Wu, Lin

    2017-01-01

    With the popular use of geotagged images, more and more research effort has been devoted to geographical scene classification. In geographical scene classification, valid spatial feature selection can significantly boost the final performance. Bag of visual words (BoVW) can do well at feature selection in geographical scene classification; nevertheless, it works effectively only if the provided feature extractor is well matched. In this paper, we use convolutional neural networks (CNNs) to optimize the proposed feature extractor, so that it can learn more suitable visual vocabularies from the geotagged images. Our approach achieves better performance than BoVW as a tool for geographical scene classification on three datasets that contain a variety of scene categories. PMID:28706534
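
    The pipeline this record describes, local features quantized into a visual vocabulary, can be sketched as follows. The random arrays stand in for per-patch CNN activations, and the sizes and the use of scikit-learn are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.svm import LinearSVC

    rng = np.random.default_rng(0)
    n_images, patches_per_image, feat_dim, n_words = 40, 30, 64, 16

    # Stand-ins for per-patch CNN activations (one row per local descriptor).
    descriptors = [rng.normal(size=(patches_per_image, feat_dim))
                   for _ in range(n_images)]
    labels = rng.integers(0, 2, size=n_images)  # two scene categories

    # 1. Learn the visual vocabulary over all local descriptors.
    kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=0)
    kmeans.fit(np.vstack(descriptors))

    # 2. Encode each image as a normalized histogram of visual words.
    def encode(d):
        hist = np.bincount(kmeans.predict(d), minlength=n_words).astype(float)
        return hist / hist.sum()

    X = np.array([encode(d) for d in descriptors])

    # 3. Train the scene classifier on the histograms.
    clf = LinearSVC().fit(X, labels)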

  8. Plasticity in the adult language system: a longitudinal electrophysiological study on second language learning.

    PubMed

    Stein, M; Dierks, T; Brandeis, D; Wirth, M; Strik, W; Koenig, T

    2006-11-01

    Event-related potentials (ERPs) were used to trace changes in brain activity related to progress in second language learning. Twelve English-speaking exchange students learning German in Switzerland were recruited. ERPs to visually presented single words from the subjects' native language (English), second language (German) and an unknown language (Romansh) were measured before (day 1) and after (day 2) 5 months of intense German language learning. When comparing ERPs to German words from day 1 and day 2, we found topographic differences between 396 and 540 ms. These differences could be interpreted as a latency shift indicating faster processing of German words on day 2. Source analysis indicated that the topographic differences were accounted for by shorter activation of left inferior frontal gyrus (IFG) on day 2. In ERPs to English words, we found Global Field Power differences between 472 and 644 ms. This may due to memory traces related to English words being less easily activated on day 2. Alternatively, it might reflect the fact that--with German words becoming familiar on day 2--English words loose their oddball character and thus produce a weaker P300-like effect on day 2. In ERPs to Romansh words, no differences were observed. Our results reflect plasticity in the neuronal networks underlying second language acquisition. They indicate that with a higher level of second language proficiency, second language word processing is faster and requires shorter frontal activation. Thus, our results suggest that the reduced IFG activation found in previous fMRI studies might not reflect a generally lower activation but rather a shorter duration of activity.
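
    Global Field Power, the measure in which the English-word differences emerged, is the standard deviation of voltage across all electrodes at each time point, giving one number per sample that quantifies overall scalp-field strength. A minimal illustration with simulated data:

    import numpy as np

    def global_field_power(erp):
        """erp: (n_channels, n_times) average-referenced ERP."""
        return erp.std(axis=0)  # spatial SD at each time point

    erp = np.random.default_rng(0).normal(size=(30, 256))  # simulated 30-channel ERP
    gfp = global_field_power(erp)  # compare conditions sample by sample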

  9. Evidence from neglect dyslexia for morphological decomposition at the early stages of orthographic-visual analysis

    PubMed Central

    Reznick, Julia; Friedmann, Naama

    2015-01-01

    This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159

  10. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language.

    PubMed

    Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word's meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training.

  11. Enrichment Effects of Gestures and Pictures on Abstract Words in a Second Language

    PubMed Central

    Repetto, Claudia; Pedroli, Elisa; Macedonia, Manuela

    2017-01-01

    Laboratory research has demonstrated that multisensory enrichment promotes verbal learning in a foreign language (L2). Enrichment can be done in various ways, e.g., by adding a picture that illustrates the L2 word’s meaning or by the learner performing a gesture to the word (enactment). Most studies have tested enrichment on concrete but not on abstract words. Unlike concrete words, the representation of abstract words is deprived of sensory-motor features. This has been addressed as one of the reasons why abstract words are difficult to remember. Here, we ask whether a brief enrichment training by means of pictures and by self-performed gestures also enhances the memorability of abstract words in L2. Further, we explore which of these two enrichment strategies is more effective. Twenty young adults learned 30 novel abstract words in L2 according to three encoding conditions: (1) reading, (2) reading and pairing the novel word to a picture, and (3) reading and enacting the word by means of a gesture. We measured memory performance in free and cued recall tests, as well as in a visual recognition task. Words encoded with gestures were better remembered in the free recall in the native language (L1). When recognizing the novel words, participants made fewer errors for words encoded with gestures compared to words encoded with pictures. The reaction times in the recognition task did not differ across conditions. The present findings support, even if only partially, the idea that enactment promotes learning of abstract words and that it is superior to enrichment by means of pictures even after short training. PMID:29326617

  12. Multiple Views of Space: Continuous Visual Flow Enhances Small-Scale Spatial Learning

    ERIC Educational Resources Information Center

    Holmes, Corinne A.; Marchette, Steven A.; Newcombe, Nora S.

    2017-01-01

    In the real world, we perceive our environment as a series of static and dynamic views, with viewpoint transitions providing a natural link from one static view to the next. The current research examined if experiencing such transitions is fundamental to learning the spatial layout of small-scale displays. In Experiment 1, participants viewed a…

  13. The Effects of Seductive Details on Motivation and Learning in Multimedia Environments: Does Individual Interest Matter?

    ERIC Educational Resources Information Center

    Schehl, Jeanne M.

    2012-01-01

    Research about motivation indicates that a student's attention must be gained and sustained for learning to occur. As a result, motivational tactics, including adding interesting words, sounds and visuals to instructional materials, are commonly used by designers of instruction to trigger and sustain learners' interest and engagement…

  14. The Development of Long-Term Lexical Representations through Hebb Repetition Learning

    ERIC Educational Resources Information Center

    Szmalec, Arnaud; Page, Mike P. A.; Duyck, Wouter

    2012-01-01

    This study clarifies the involvement of short- and long-term memory in novel word-form learning, using the Hebb repetition paradigm. In Experiment 1, participants recalled sequences of visually presented syllables (e.g., "la"-"va"-"bu"-"sa"-"fa"-"ra"-"re"-"si"-"di"), with one particular (Hebb) sequence repeated on every third trial. Crucially,…

  15. The Keyword Method of Vocabulary Acquisition: An Experimental Evaluation.

    ERIC Educational Resources Information Center

    Griffith, Douglas

    The keyword method of vocabulary acquisition is a two-step mnemonic technique for learning vocabulary terms. The first step, the acoustic link, generates a keyword based on the sound of the foreign word. The second step, the imagery link, ties the keyword to the meaning of the item to be learned, via an interactive visual image or other…

  16. Development of a Math-Learning App for Students with Visual Impairments

    ERIC Educational Resources Information Center

    Beal, Carole R.; Rosenblum, L. Penny

    2015-01-01

    The project was conducted to make an online tutoring program for math word problem solving accessible to students with visual impairments (VI). An online survey of teachers of students with VI (TVIs) guided the decision to provide the math content in the form of an iPad app, accompanied by print and braille materials. The app includes audio…

  17. What's behind a face: person context coding in fusiform face area as revealed by multivoxel pattern analysis.

    PubMed

    van den Hurk, J; Gentile, F; Jansma, B M

    2011-12-01

    The identification of a face comprises processing of both visual features and conceptual knowledge. Studies showing that the fusiform face area (FFA) is sensitive to face identity generally neglect this dissociation. The present study is the first that isolates conceptual face processing by using words presented in a person context instead of faces. The design consisted of 2 different conditions. In one condition, participants were presented with blocks of words related to each other at the categorical level (e.g., brands of cars, European cities). The second condition consisted of blocks of words linked to the personality features of a specific face. Both conditions were created from the same 8 × 8 word matrix, thereby controlling for visual input across conditions. Univariate statistical contrasts did not yield any significant differences between the 2 conditions in FFA. However, a machine learning classification algorithm was able to successfully learn the functional relationship between the 2 contexts and their underlying response patterns in FFA, suggesting that these activation patterns can code for different semantic contexts. These results suggest that the level of processing in FFA goes beyond facial features. This has strong implications for the debate about the role of FFA in face identification.
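
    The multivoxel pattern analysis described here amounts to training a classifier on voxel response patterns and testing it with cross-validation: above-chance accuracy shows that the patterns carry condition information even when the mean (univariate) response does not differ. A minimal sketch with simulated data, not the study's actual pipeline:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_blocks, n_voxels = 32, 120  # illustrative sizes

    patterns = rng.normal(size=(n_blocks, n_voxels))  # FFA pattern per block
    conditions = np.repeat([0, 1], n_blocks // 2)     # 0 = categorical, 1 = person

    # Cross-validated decoding accuracy; ~0.5 is chance for two conditions.
    acc = cross_val_score(SVC(kernel="linear"), patterns, conditions, cv=8)
    print(acc.mean())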

  18. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  19. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    PubMed

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  20. Event-related brain potentials in memory: correlates of episodic, semantic and implicit memory.

    PubMed

    Wieser, Stephan; Wieser, Heinz Gregor

    2003-06-01

    To study cognitive evoked potentials, recorded from scalp EEG and foramen ovale electrodes, during activation of explicit and implicit memory. The subgroups of explicit memory, episodic and semantic memory, are examined separately. A word-learning task was used, which has been shown to activate the hippocampus in H(2)(15)O positron emission tomography studies. Subjects had to study and remember word pairs using different learning strategies: (i) associative word learning (AWL), which activates the episodic memory, (ii) deep single word encoding (DSWE), which activates the semantic memory, and (iii) shallow single word encoding (SSWE), which activates the implicit memory and serves as a baseline. The test included the 'remember/know' paradigm as a behavioural learning control. During the task condition, a 10-20 scalp EEG with additional electrodes in both temporal lobe regions was recorded from 11 healthy volunteers. In one patient with mesiotemporal lobe epilepsy, the EEG was recorded from bilateral foramen ovale electrodes directly from mesial temporal lobe structures. Event-related potentials (ERPs) were calculated off-line and visual and statistical analyses were made. The associative learning strategy produced the best memory performance and the best noetic awareness experience, whereas shallow single word encoding produced the worst performance and the smallest noetic awareness. Deep single word encoding performance was in between. ERPs differed according to the test condition, during both encoding and retrieval, from both the scalp EEG and the foramen ovale electrode recordings. Encoding showed significant differences between shallow single word encoding (SSWE), which is mainly a function of graphical characteristics, and the other two strategies, deep single word encoding (DSWE) and associative word learning (AWL), in which there is semantic processing of the meaning. ERPs generated by these two categories, which are both functions of explicit memory, differed as well, indicating the presence or absence of associative binding. Retrieval showed a significant test effect between the word pairs learned by association (AWL) and the ones learned by encoding the words in isolation from each other (DSWE and SSWE). The comparison of the ERPs generated by autonoetic awareness ('remember') and noetic awareness ('know') exhibited a significant test effect as well. The behavioural data, in particular those from the 'remember/know' procedure, provide evidence that the task paradigm was efficient in activating different kinds of memory. Associative word learning generated a high degree of autonoetic awareness, which is a result of the episodic memory, whereas both kinds of single word learning generated less. AWL, DSWE and SSWE resulted in different electrophysiological correlates, for both encoding and retrieval, indicating that different brain structures were activated in different temporal sequences.

  1. Audiovisual alignment of co-speech gestures to speech supports word learning in 2-year-olds.

    PubMed

    Jesse, Alexandra; Johnson, Elizabeth K

    2016-05-01

    Analyses of caregiver-child communication suggest that an adult tends to highlight objects in a child's visual scene by moving them in a manner that is temporally aligned with the adult's speech productions. Here, we used the looking-while-listening paradigm to examine whether 25-month-olds use audiovisual temporal alignment to disambiguate and learn novel word-referent mappings in a difficult word-learning task. Videos of two equally interesting and animated novel objects were simultaneously presented to children, but the movement of only one of the objects was aligned with an accompanying object-labeling audio track. No social cues (e.g., pointing, eye gaze, touch) were available to the children because the speaker was edited out of the videos. Immediately afterward, toddlers were presented with still images of the two objects and asked to look at one or the other. Toddlers looked reliably longer to the labeled object, demonstrating their acquisition of the novel word-referent mapping. A control condition showed that children's performance was not solely due to the single unambiguous labeling that had occurred at experiment onset. We conclude that the temporal link between a speaker's utterances and the motion they imposed on the referent object helps toddlers to deduce a speaker's intended reference in a difficult word-learning scenario. In combination with our previous work, these findings suggest that intersensory redundancy is a source of information used by language users of all ages. That is, intersensory redundancy is not just a word-learning tool used by young infants. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Visual Field Differences in Visual Word Recognition Can Emerge Purely from Perceptual Learning: Evidence from Modeling Chinese Character Pronunciation

    ERIC Educational Resources Information Center

    Hsiao, Janet Hui-wen

    2011-01-01

    In Chinese orthography, a dominant character structure exists in which a semantic radical appears on the left and a phonetic radical on the right (SP characters); a minority opposite arrangement also exists (PS characters). As the number of phonetic radical types is much greater than semantic radical types, in SP characters the information is…

  3. English- and Mandarin-Learning Infants' Discrimination of Actions and Objects in Dynamic Events

    ERIC Educational Resources Information Center

    Chen, Jie; Tardif, Twila; Pulverman, Rachel; Casasola, Marianella; Zhu, Liqi; Zheng, Xiaobei; Meng, Xiangzhi

    2015-01-01

    The present studies examined the role of linguistic experience in directing English and Mandarin learners' attention to aspects of a visual scene. Specifically, they asked whether young language learners in these 2 cultures attend to differential aspects of a word-learning situation. Two groups of English and Mandarin learners, 6-8-month-olds (n =…

  4. Reading with sounds: sensory substitution selectively activates the visual word form area in the blind.

    PubMed

    Striem-Amit, Ella; Cohen, Laurent; Dehaene, Stanislas; Amedi, Amir

    2012-11-08

    Using a visual-to-auditory sensory-substitution algorithm, congenitally fully blind adults were taught to read and recognize complex images using "soundscapes"--sounds topographically representing images. fMRI was used to examine key questions regarding the visual word form area (VWFA): its selectivity for letters over other visual categories without visual experience, its feature tolerance for reading in a novel sensory modality, and its plasticity for scripts learned in adulthood. The blind activated the VWFA specifically and selectively during the processing of letter soundscapes relative to both textures and visually complex object categories and relative to mental imagery and semantic-content controls. Further, VWFA recruitment for reading soundscapes emerged after 2 hr of training in a blind adult on a novel script. Therefore, the VWFA shows category selectivity regardless of input sensory modality, visual experience, and long-term familiarity or expertise with the script. The VWFA may perform a flexible task-specific rather than sensory-specific computation, possibly linking letter shapes to phonology. Copyright © 2012 Elsevier Inc. All rights reserved.

  5. Pornographic image recognition and filtering using incremental learning in compressed domain

    NASA Astrophysics Data System (ADS)

    Zhang, Jing; Wang, Chao; Zhuo, Li; Geng, Wenhao

    2015-11-01

    With the rapid development and popularity of the network, the openness, anonymity, and interactivity of networks have led to the spread and proliferation of pornographic images on the Internet, which do great harm to adolescents' physical and mental health. Because image compression standards are now well established, pornographic images are mainly stored in compressed formats, so efficiently filtering them is a challenging information-security problem. A pornographic image recognition and filtering method in the compressed domain is proposed using incremental learning, which includes the following steps: (1) low-resolution (LR) images are first reconstructed from the compressed stream of pornographic images; (2) visual words are created from the LR image to represent the pornographic image; and (3) after a covering algorithm is used to train and recognize the visual words and build the initial classification model, incremental learning continuously adjusts the classification rules to recognize new pornographic image samples. The experimental results show that the proposed method achieves a higher recognition rate and a lower recognition time in the compressed domain.
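    To make this pipeline concrete, the following minimal Python sketch illustrates steps (2) and (3) under stated assumptions: descriptor extraction from the compressed stream is stubbed out with random features, and the paper's covering algorithm is replaced by scikit-learn's SGDClassifier, whose partial_fit gives the same kind of incremental update over new samples.

    ```python
    # Illustrative sketch only: random stand-in descriptors, and an SGD
    # linear classifier in place of the paper's covering algorithm.
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)

    def local_descriptors(lr_image):
        """Stand-in for descriptors computed on a low-resolution (LR)
        image reconstructed from the compressed stream (step 1)."""
        return rng.normal(size=(50, 64))  # 50 patches x 64-dim descriptors

    # Step 2: build a visual vocabulary; encode images as word histograms.
    vocab = MiniBatchKMeans(n_clusters=256, random_state=0)
    vocab.fit(np.vstack([local_descriptors(None) for _ in range(20)]))

    def bovw_histogram(lr_image):
        words = vocab.predict(local_descriptors(lr_image))
        hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
        return hist / hist.sum()

    # Step 3: train an initial model, then adjust it incrementally as new
    # labelled samples arrive, without retraining from scratch.
    clf = SGDClassifier(random_state=0)
    X0 = np.array([bovw_histogram(None) for _ in range(40)])
    y0 = rng.integers(0, 2, size=40)  # 1 = pornographic, 0 = benign
    clf.partial_fit(X0, y0, classes=[0, 1])

    X_new = np.array([bovw_histogram(None) for _ in range(10)])
    clf.partial_fit(X_new, rng.integers(0, 2, size=10))  # incremental update
    ```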

  6. Braille in the Sighted: Teaching Tactile Reading to Sighted Adults.

    PubMed

    Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Hańczur, Paweł; Szwed, Marcin

    2016-01-01

    Blind people are known to have superior perceptual abilities in their remaining senses. Several studies suggest that these enhancements are dependent on the specific experience of blind individuals, who use those remaining senses more than sighted subjects. In line with this view, sighted subjects, when trained, are able to significantly progress in relatively simple tactile tasks. However, the case of complex tactile tasks is less obvious, as some studies suggest that visual deprivation itself could confer large advantages in learning them. It remains unclear to what extent those complex skills, such as braille reading, can be learnt by sighted subjects. Here we enrolled twenty-nine sighted adults, mostly braille teachers and educators, in a 9-month braille reading course. At the beginning of the course, all subjects were naive in tactile braille reading. After the course, almost all were able to read whole braille words at a mean speed of 6 words-per-minute. Subjects with low tactile acuity did not differ significantly in braille reading speed from the rest of the group, indicating that low tactile acuity is not a limiting factor for learning braille, at least at this early stage of learning. Our study shows that most sighted adults can learn whole-word braille reading, given the right method and a considerable amount of motivation. The adult sensorimotor system can thus adapt, to some level, to very complex tactile tasks without visual deprivation. The pace of learning in our group was comparable to congenitally and early blind children learning braille in primary school, which suggests that the blind's mastery of complex tactile tasks can, to a large extent, be explained by experience-dependent mechanisms.

  7. Braille in the Sighted: Teaching Tactile Reading to Sighted Adults

    PubMed Central

    Bola, Łukasz; Siuda-Krzywicka, Katarzyna; Paplińska, Małgorzata; Sumera, Ewa; Hańczur, Paweł; Szwed, Marcin

    2016-01-01

    Blind people are known to have superior perceptual abilities in their remaining senses. Several studies suggest that these enhancements are dependent on the specific experience of blind individuals, who use those remaining senses more than sighted subjects. In line with this view, sighted subjects, when trained, are able to significantly progress in relatively simple tactile tasks. However, the case of complex tactile tasks is less obvious, as some studies suggest that visual deprivation itself could confer large advantages in learning them. It remains unclear to what extent those complex skills, such as braille reading, can be learnt by sighted subjects. Here we enrolled twenty-nine sighted adults, mostly braille teachers and educators, in a 9-month braille reading course. At the beginning of the course, all subjects were naive in tactile braille reading. After the course, almost all were able to read whole braille words at a mean speed of 6 words-per-minute. Subjects with low tactile acuity did not differ significantly in braille reading speed from the rest of the group, indicating that low tactile acuity is not a limiting factor for learning braille, at least at this early stage of learning. Our study shows that most sighted adults can learn whole-word braille reading, given the right method and a considerable amount of motivation. The adult sensorimotor system can thus adapt, to some level, to very complex tactile tasks without visual deprivation. The pace of learning in our group was comparable to congenitally and early blind children learning braille in primary school, which suggests that the blind’s mastery of complex tactile tasks can, to a large extent, be explained by experience-dependent mechanisms. PMID:27187496

  8. Altered Activation and Functional Asymmetry of Exner's Area but not the Visual Word Form Area in a Child with Sudden-onset, Persistent Mirror Writing.

    PubMed

    Linke, Annika; Roach-Fox, Elizabeth; Vriezen, Ellen; Prasad, Asuri Narayan; Cusack, Rhodri

    2018-06-02

    Mirror writing is often produced by healthy children during early acquisition of literacy and has been observed in adults following neurological disorders or insults. The neural mechanisms responsible for involuntary mirror writing remain debated, but in healthy children it is typically attributed to the delayed development of the process by which mirror invariance is overcome while learning to read and write. We present an unusual case of sudden-onset, persistent mirror writing in a previously typical seven-year-old girl. Using her dominant right hand only, she copied and spontaneously produced all letters, words and sentences, as well as some numbers and objects, in mirror image. Additionally, she frequently misidentified letter orientations in perceptual assessments. Clinical, neuropsychological, and functional neuroimaging studies were carried out over sixteen months. Neurologic and ophthalmologic examinations and a standard clinical MRI scan of the head were normal. Neuropsychological testing revealed average scores on most tests of intellectual function, language function, and verbal learning and memory. Visual perception and visual reasoning were average, with the exception of below-average form constancy and mild difficulties on some visual memory tests. Activation and functional connectivity of the reading and writing network were assessed with fMRI. During a reading task, the VWFA showed a strong response to words in mirrored but not in normal letter orientation (similar to what has previously been observed in typically developing children), but activation was atypically reduced in right primary visual cortex and Exner's Area. Resting-state connectivity within the reading and writing network was similar to that of age-matched controls, but an atypical hemispheric asymmetry in the balance of motor-to-visual input was found for Exner's Area. In summary, this unusual case suggests that a disruption of visual-motor integration, rather than of the VWFA, can contribute to sudden-onset, persistent mirror writing in the absence of clinically detectable neurological insult. Copyright © 2018. Published by Elsevier Ltd.

  9. A Bayesian generative model for learning semantic hierarchies

    PubMed Central

    Mittelman, Roni; Sun, Min; Kuipers, Benjamin; Savarese, Silvio

    2014-01-01

    Building fine-grained visual recognition systems that are capable of recognizing tens of thousands of categories has received much attention in recent years. The well-known semantic hierarchical structure of categories and concepts has been shown to provide a key prior that allows for optimal predictions. The hierarchical organization of various domains and concepts has been the subject of extensive research, and led to the development of the WordNet domains hierarchy (Fellbaum, 1998), which was also used to organize the images in the ImageNet dataset (Deng et al., 2009), in which the category count approaches the human capacity. Still, for the human visual system, the form of the hierarchy must be discovered with minimal use of supervision or innate knowledge. In this work, we propose a new Bayesian generative model for learning such domain hierarchies, based on semantic input. Our model is motivated by the super-subordinate organization of domain labels and concepts that characterizes WordNet, and accounts for several important challenges: maintaining context information when progressing deeper into the hierarchy, learning a coherent semantic concept for each node, and modeling uncertainty in the perception process. PMID:24904452

  10. Sounds and meanings working together: Word learning as a collaborative effort

    PubMed Central

    Saffran, Jenny

    2014-01-01

    Over the past several decades, researchers have discovered a great deal of information about the processes underlying language acquisition. From as early as they can be studied, infants are sensitive to the nuances of native-language sound structure. Similarly, infants are attuned to the visual and conceptual structure of their environments starting in the early postnatal period. Months later, they become adept at putting these two arenas of experience together, mapping sounds to meanings. How might learning sounds influence learning meanings, and vice versa? In this paper, I will describe several recent lines of research suggesting that knowledge concerning the sound structure of language facilitates subsequent mapping of sounds to meanings. I will also discuss recent findings suggesting that from its beginnings, the lexicon incorporates relationships amongst the sounds and meanings of newly learned words. PMID:25202163

  11. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were tachistoscopically presented in the left, the right, or both visual fields. Two types of words were used as stimuli: Katakana-familiar words, which are more frequently written in Katakana script, and Hiragana-familiar words, which are written predominantly in Hiragana script. Two conditions were set up in terms of the visual familiarity of a word: in the visually familiar condition, words were presented in their familiar script form, and in the visually unfamiliar condition, words were presented in the less familiar script form. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed that a bilateral gain, i.e., superior performance for bilateral visual-field presentation relative to unilateral presentation, was obtained only in the visually familiar condition, not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  12. Why Do Pictures, but Not Visual Words, Reduce Older Adults’ False Memories?

    PubMed Central

    Smith, Rebekah E.; Hunt, R. Reed; Dunlap, Kathryn R.

    2015-01-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study-list words is accompanied by related pictures, relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word, relative to hearing the word only. In both the case of pictures relative to visual words and that of visual words relative to auditory words alone, the benefit of pictures and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all three study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory-only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in that condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. PMID:26213799

  13. Why do pictures, but not visual words, reduce older adults' false memories?

    PubMed

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study-list words is accompanied by related pictures, relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word, relative to hearing the word only. In both the case of pictures relative to visual words and that of visual words relative to auditory words alone, the benefit of pictures and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory-only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in that condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  14. Aural mapping of STEM concepts using literature mining

    NASA Astrophysics Data System (ADS)

    Bharadwaj, Venkatesh

    Recent technological applications have made people's lives deeply dependent on Science, Technology, Engineering, and Mathematics (STEM) and its applications. Understanding basic science is a must in order to use and contribute to this technological revolution. Science education at the middle and high school levels, however, depends heavily on visual representations such as models, diagrams, figures, animations, and presentations. This leaves visually impaired students with very few options to learn science and secure a career in STEM-related areas. Recent experiments have shown that small aural cues called audemes are helpful for understanding and memorization of science concepts among visually impaired students. Audemes are non-verbal sound translations of a science concept. To make science concepts available as audemes for visually impaired students, this thesis presents an automatic system for audeme generation from STEM textbooks. The thesis describes the systematic application of multiple Natural Language Processing tools and techniques, such as dependency parsing, POS tagging, information retrieval algorithms, semantic mapping of aural words, and machine learning, to transform a science concept into a combination of atomic sounds, thus forming an audeme. We present a rule-based classification method for all STEM-related concepts. This work also presents a novel way of mapping and extracting the most closely related sounds for the words used in a textbook. Additionally, machine learning methods are used to tailor the output to a user's perception. The system presented is robust, scalable, fully automatic, and dynamically adaptable for audeme generation.
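    As a purely illustrative sketch of the final mapping stage described above, the Python snippet below extracts content words from a textbook sentence and looks them up in a hypothetical word-to-sound mapping to assemble an audeme. The stopword list, sound inventory, and mapping are invented stand-ins for what the thesis derives with its NLP and learning pipeline.

    ```python
    # Invented, minimal stand-ins for the thesis pipeline's resources.
    STOPWORDS = {"the", "a", "of", "is", "in", "and", "to", "by", "driven"}

    # Hypothetical mapping from concept words to atomic sound files.
    SOUND_MAP = {
        "water": "stream_trickle.wav",
        "cycle": "loop_whoosh.wav",
        "evaporation": "kettle_hiss.wav",
    }

    def extract_keywords(sentence: str) -> list[str]:
        """Crude stand-in for the POS-tagging/IR stage: keep content words."""
        return [w for w in sentence.lower().replace(".", "").split()
                if w not in STOPWORDS]

    def build_audeme(sentence: str) -> list[str]:
        """Map each recognized keyword to its atomic sound."""
        return [SOUND_MAP[w] for w in extract_keywords(sentence)
                if w in SOUND_MAP]

    print(build_audeme("The water cycle is driven by evaporation."))
    # ['stream_trickle.wav', 'loop_whoosh.wav', 'kettle_hiss.wav']
    ```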

  15. What You Learn is What You See: Using Eye Movements to Study Infant Cross-Situational Word Learning

    PubMed Central

    Smith, Linda

    2016-01-01

    Recent studies show that both adults and young children possess powerful statistical learning capabilities for solving the word-to-world mapping problem. However, the underlying mechanisms that make statistical learning possible and powerful are not yet known. With the goal of providing new insights into this issue, the research reported in this paper used an eye tracker to record the moment-by-moment eye movement data of 14-month-old babies in statistical learning tasks. Various measures are applied to these fine-grained temporal data, such as looking duration and shift rate (the number of shifts in gaze from one visual object to the other) trial by trial, revealing different eye movement patterns between strong and weak statistical learners. Moreover, an information-theoretic measure is developed and applied to the gaze data to quantify the degree of learning uncertainty trial by trial. Next, a simple associative statistical learning model is applied to the eye movement data, and the simulation results correlate strongly with the empirical results from young children. This suggests that an associative learning mechanism with selective attention can provide a cognitively plausible model of cross-situational statistical learning. The work represents the first steps toward using eye movement data to infer underlying real-time processes in statistical word learning. PMID:22213894
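    One simple way to realize such an information-theoretic measure (the paper's exact formulation may differ) is to treat the proportion of looking time on each of the two objects in a trial as a probability distribution and take its Shannon entropy, as sketched below: values near 1 bit mean the child's gaze is split between objects (maximal uncertainty), while values near 0 mean it has settled on one referent.

    ```python
    # Sketch of a trial-by-trial gaze-uncertainty measure: Shannon entropy
    # of the looking-time distribution over the candidate referents.
    import numpy as np

    def gaze_entropy(looking_times):
        p = np.asarray(looking_times, dtype=float)
        p = p / p.sum()
        p = p[p > 0]  # 0 * log2(0) is taken as 0
        return float(-(p * np.log2(p)).sum())

    print(gaze_entropy([1.9, 2.1]))  # ~1.00 bit: undecided between objects
    print(gaze_entropy([3.6, 0.4]))  # ~0.47 bits: converging on one referent
    ```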

  16. The Effect of Imagery Instruction on Vocabulary Development. College Reading and Learning Assistance Technical Report No. 87-05.

    ERIC Educational Resources Information Center

    Smith, Brenda D.; And Others

    To explore the usefulness of imagery as a learning tool in a classroom situation, this study investigated whether a visual image has an additive effect on the recall of definitions of previously unknown English words. One-hundred-forty-two students enrolled in six sections of an upper level developmental reading course at Georgia State University…

  17. Does a Picture Say More than 7000 Words? Windows of Opportunity to Learn Languages--An Attempt at a Creative Reflective Poster

    ERIC Educational Resources Information Center

    Schaller-Schwaner, Iris

    2015-01-01

    This article originated in a creative attempt to engage audiences visually, on a poster, with ideas about language(s), teaching and learning which have been informing language education at university language centres. It was originally locally grounded and devised to take soundings with colleagues and with participants at the CercleS 2014…

  18. Massive cortical reorganization in sighted Braille readers.

    PubMed

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-03-15

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory, etc.). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), that was modulated by their Braille reading speed, as well as strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills.

  19. The effect of animation on learning action symbols by individuals with intellectual disabilities.

    PubMed

    Fujisawa, Kazuko; Inoue, Tomoyoshi; Yamana, Yuko; Hayashi, Humirhiro

    2011-03-01

    The purpose of the present study was to investigate whether participants with intellectual impairments could benefit from the movement associated with animated pictures while they were learning symbol names. Sixteen school students, whose linguistic-developmental age ranged from 38 to 91 months, participated in the experiment. They were taught 16 static visual symbols and the corresponding action words (naming task) in two sessions conducted one week apart. In the experimental condition, animation was employed to facilitate comprehension, whereas no animation was used in the control condition. Enhancement of learning was shown in the experimental condition, suggesting that the participants benefited from animated symbols. Furthermore, it was found that the lower the linguistic-developmental age, the more effective the animated cue was in learning static visual symbols.

  20. Learning to read an alphabet of human faces produces left-lateralized training effects in the fusiform gyrus.

    PubMed

    Moore, Michelle W; Durisko, Corrine; Perfetti, Charles A; Fiez, Julie A

    2014-04-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face-phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech.

  1. Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.

    PubMed

    Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit

    2017-06-01

    We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decisions of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign versus malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal-phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves on the classical BoVW method for all tested applications. For chest x-rays, an area under the curve of 0.876 was obtained for enlarged mediastinum identification, compared with 0.855 using classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved with the new approach (p-value 0.03). For liver lesion classification, improvements of 6% in sensitivity and 2% in specificity were obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for the training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
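    The dictionary-learning step lends itself to a short sketch. The Python fragment below scores each visual word in a BoVW count matrix by mutual information with the task labels and keeps only the top-k words; the data are random stand-ins, and scikit-learn's mutual_info_classif stands in for the paper's exact MI criterion.

    ```python
    # Illustrative task-driven dictionary selection: keep the k visual
    # words most informative about the labels (random stand-in data).
    import numpy as np
    from sklearn.feature_selection import mutual_info_classif

    rng = np.random.default_rng(0)
    n_images, n_words, k = 200, 500, 50

    X = rng.poisson(2.0, size=(n_images, n_words)).astype(float)  # word counts
    y = rng.integers(0, 2, size=n_images)  # e.g., pathology present / absent

    mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
    task_dictionary = np.argsort(mi)[::-1][:k]  # indices of top-k words

    X_reduced = X[:, task_dictionary]  # task-driven BoVW representation
    print(X_reduced.shape)             # (200, 50)
    ```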

  2. Neural networks involved in learning lexical-semantic and syntactic information in a second language.

    PubMed

    Mueller, Jutta L; Rueschemeyer, Shirley-Ann; Ono, Kentaro; Sugiura, Motoaki; Sadato, Norihiro; Nakamura, Akinori

    2014-01-01

    The present study used functional magnetic resonance imaging (fMRI) to investigate the neural correlates of language acquisition in a realistic learning environment. Japanese native speakers were trained in a miniature version of German prior to fMRI scanning. During scanning they listened to (1) familiar sentences, (2) sentences including a novel sentence structure, and (3) sentences containing a novel word while visual context provided referential information. Learning-related decreases of brain activation over time were found in a mainly left-hemispheric network comprising classical frontal and temporal language areas as well as parietal and subcortical regions; in the initial stages of learning, these decreases largely overlapped for novel words and the novel sentence structure. Differences occurred at later stages of learning, during which content-specific activation patterns emerged in prefrontal, parietal, and temporal cortices. The results are taken as evidence for a domain-general network that supports the initial stages of language learning and dynamically adapts as learners become proficient.

  3. Visual Cortical Representation of Whole Words and Hemifield-split Word Parts.

    PubMed

    Strother, Lars; Coros, Alexandra M; Vilis, Tutis

    2016-02-01

    Reading requires the neural integration of visual word form information that is split between our retinal hemifields. We examined multiple visual cortical areas involved in this process by measuring fMRI responses while observers viewed words that changed or repeated in one or both hemifields. We were specifically interested in identifying brain areas that exhibit decreased fMRI responses as a result of repeated versus changing visual word form information in each visual hemifield. Our method yielded highly significant effects of word repetition in a previously reported visual word form area (VWFA) in occipitotemporal cortex, which represents hemifield-split words as whole units. We also identified a more posterior occipital word form area (OWFA), which represents word form information in the right and left hemifields independently and is thus both functionally and anatomically distinct from the VWFA. Both the VWFA and the OWFA were left-lateralized in our study and strikingly symmetric in anatomical location relative to known face-selective visual cortical areas in the right hemisphere. Our findings are consistent with the observation that category-selective visual areas come in pairs and support the view that neural mechanisms in left visual cortex--especially those that evolved to support the visual processing of faces--are developmentally malleable and become incorporated into a left-lateralized visual word form network that supports rapid word recognition and reading.

  4. Generating descriptive visual words and visual phrases for large-scale image applications.

    PubMed

    Zhang, Shiliang; Tian, Qi; Hua, Gang; Huang, Qingming; Gao, Wen

    2011-09-01

    Bag-of-visual-words (BoWs) representation has been applied to various problems in the fields of multimedia and computer vision. The basic idea is to represent images as visual documents composed of repeatable and distinctive visual elements, which are comparable to text words. Notwithstanding its great success and wide adoption, a visual vocabulary created from single-image local descriptors is often not as effective as desired. In this paper, descriptive visual words (DVWs) and descriptive visual phrases (DVPs) are proposed as the visual correspondences to text words and phrases, where visual phrases refer to frequently co-occurring visual word pairs. Since images are the carriers of visual objects and scenes, a descriptive visual element set can be composed of the visual words and their combinations that are effective in representing certain visual objects or scenes. Based on this idea, a general framework is proposed for generating DVWs and DVPs for image applications. In a large-scale image database containing 1506 object and scene categories, the visual words and visual word pairs descriptive of certain objects or scenes are identified and collected as the DVWs and DVPs. Experiments show that the DVWs and DVPs are informative and descriptive and, thus, are more comparable with text words than the classic visual words. We apply the identified DVWs and DVPs in several applications, including large-scale near-duplicate image retrieval, image search re-ranking, and object recognition. The combination of DVWs and DVPs performs better than the state of the art in large-scale near-duplicate image retrieval in terms of accuracy, efficiency, and memory consumption. The proposed image search re-ranking algorithm, DWPRank, outperforms the state-of-the-art algorithm by 12.4% in mean average precision and is about 11 times faster.
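    A minimal sketch of the DVP idea follows, under the simplifying assumption that a phrase is any visual word pair co-occurring in enough images of a category; the paper's actual selection criteria are richer, and the corpus here is a toy stand-in.

    ```python
    # Toy mining of descriptive visual phrases: visual word pairs that
    # co-occur in many images of one category become candidate DVPs.
    from collections import Counter
    from itertools import combinations

    # Each image is the set of visual word IDs observed in it.
    images = [
        {3, 17, 42, 99},
        {3, 17, 88},
        {3, 17, 42},
        {5, 17, 42},
    ]

    pair_counts = Counter()
    for words in images:
        pair_counts.update(combinations(sorted(words), 2))

    min_support = 3  # pair must co-occur in at least 3 images
    dvps = [pair for pair, c in pair_counts.items() if c >= min_support]
    print(dvps)  # [(3, 17), (17, 42)] -- candidate descriptive visual phrases
    ```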

  5. Efficient Learning for the Poor: New Insights into Literacy Acquisition for Children

    NASA Astrophysics Data System (ADS)

    Abadzi, Helen

    2008-11-01

    Reading depends on the speed of visual recognition and capacity of short-term memory. To understand a sentence, the mind must read it fast enough to capture it within the limits of the short-term memory. This means that children must attain a minimum speed of fairly accurate reading to understand a passage. Learning to read involves "tricking" the brain into perceiving groups of letters as coherent words. This is achieved most efficiently by pairing small units consistently with sounds rather than learning entire words. To link the letters with sounds, explicit and extensive practice is needed; the more complex the spelling of a language, the more practice is necessary. However, schools of low-income students often waste instructional time and lack reading resources, so students cannot get sufficient practice to automatize reading and may remain illiterate for years. Lack of reading fluency in the early grades creates inefficiencies that affect the entire educational system. Neurocognitive research on reading points to benchmarks and monitoring indicators. All students should attain reading speeds of 45-60 words per minute by the end of grade 2 and 120-150 words per minute for grades 6-8.

  6. Reading in the dark: neural correlates and cross-modal plasticity for learning to read entire words without visual experience.

    PubMed

    Sigalov, Nadine; Maidenbaum, Shachar; Amedi, Amir

    2016-03-01

    Cognitive neuroscience has long attempted to determine the ways in which cortical selectivity develops, and the impact of nature vs. nurture on it. Congenital blindness (CB) offers a unique opportunity to test this question, as the brains of blind individuals develop without visual experience. Here we approach this question through the reading network. Several areas in the visual cortex have been implicated as part of the reading network, and one of the main ones among them is the VWFA, which is selective to the form of letters and words. But what happens in the CB brain? On the one hand, it has been shown that cross-modal plasticity leads to the recruitment of occipital areas, including the VWFA, for linguistic tasks. On the other hand, we have recently demonstrated VWFA activity for letters in contrast to other visual categories when the information is provided via other senses such as touch or audition. Which of these tasks is more dominant? By which mechanism does the CB brain process reading? Using fMRI and a visual-to-auditory sensory-substitution algorithm that transfers the topographical features of the letters, we compared reading with semantic and scrambled conditions in a group of CB participants. We found activation in early auditory and visual cortices during the early processing phase (letters), while the later phase (words) showed VWFA and bilateral dorsal-intraparietal activations. This further supports the notion that many visual regions, even early visual areas, maintain a predilection for their task even when the input modality varies and in spite of putative lifelong linguistic cross-modal plasticity. Furthermore, we find that the VWFA is recruited preferentially for letter and word form, while it was not recruited, and even exhibited deactivation, for an immediately subsequent semantic task, suggesting that despite only a short sensory-substitution experience, orthographic task processing can dominate semantic processing in the VWFA. On a wider scope, this implies that in at least some cases cross-modal plasticity, which enables the recruitment of areas for new tasks, may be dominated by sensory-independent, task-specific activation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    PubMed

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies have yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles, in turn depending on lesion pattern. We therefore used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning in the lesioned language network. Fourteen patients with chronic left-hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures, while the other half were trained purely verbally. In the gesture condition, root words were visually presented (e.g., Klavier [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat the pseudowords and simultaneously reproduce the gestures. In the verbal condition, no gesture video was shown and participants only repeated the pseudowords orally. Correlational analyses confirmed that the gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely, largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus, largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of the multimodal learning strategy, in which gestures may cause a deeper semantic rooting of the novel word form. The results tap into theoretical accounts of gestures in lexical learning and suggest an explanation for the diverging effects in therapeutic studies advocating gestures in aphasia rehabilitation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Learning to Read Words in a New Language Shapes the Neural Organization of the Prior Languages

    PubMed Central

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Zhang, Mingxia; He, Qinghua; Wei, Miao; Dong, Qi

    2014-01-01

    Learning a new language entails interactions with one's prior language(s). Much research has shown how native language affects the cognitive and neural mechanisms of a new language, but little is known about whether and how learning a new language shapes the neural mechanisms of prior language(s). In two experiments in the current study, we used an artificial language training paradigm in combination with fMRI to examine (1) the effects of different linguistic components (phonology and semantics) of a new language on the neural process of prior languages (i.e., native and second languages), and (2) whether such effects were modulated by the proficiency level in the new language. Results of Experiment 1 showed that when the training in a new language involved semantics (as opposed to only visual forms and phonology), neural activity during word reading in the native language (Chinese) was reduced in several reading-related regions, including the left pars opercularis, pars triangularis, bilateral inferior temporal gyrus, fusiform gyrus, and inferior occipital gyrus. Results of Experiment 2 replicated the results of Experiment 1 and further found that semantic training also affected neural activity during word reading in the subjects’ second language (English). Furthermore, we found that the effects of the new language were modulated by the subjects’ proficiency level in the new language. These results provide critical imaging evidence for the influence of learning to read words in a new language on word reading in native and second languages. PMID:25447375

  9. Prescriptive Teaching from the DTLA.

    ERIC Educational Resources Information Center

    Banas, Norma; Wills, I. H.

    1979-01-01

    The article (Part 2 of a series) discusses the Auditory Attention Span for Unrelated Words and the Visual Attention Span for Objects subtests of the Detroit Tests of Learning Aptitude. Skills measured and related factors influencing performance are among aspects considered. Suggestions for remediating deficits and capitalizing on strengths are…

  10. Teaching the Special Needs Learner: When Words Are Not Enough

    ERIC Educational Resources Information Center

    Brill, Michelle F.

    2011-01-01

    Extension educators and volunteers provide programs to people of all ages and abilities. This includes individuals with developmental disabilities. Individuals with autism and other developmental disabilities often have difficulty communicating verbally but have strong visual learning skills. This article describes the importance of using visual…

  11. Blinded by taboo words in L1 but not L2.

    PubMed

    Colbeck, Katie L; Bowers, Jeffrey S

    2012-04-01

    The present study compares the emotionality of English taboo words in native English speakers and native Chinese speakers who learned English as a second language. Neutral and taboo/sexual words were included in a Rapid Serial Visual Presentation (RSVP) task as to-be-ignored distracters in short- and long-lag conditions. Compared with neutral distracters, taboo/sexual distracters impaired performance in the short-lag condition only. Of critical note, however, is that the performance of the Chinese speakers was less impaired by taboo/sexual distracters. This supports the view that a first language is more emotional than a second language, even when words are processed quickly and automatically. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  12. Differential Gaze Patterns on Eyes and Mouth During Audiovisual Speech Segmentation

    PubMed Central

    Lusk, Laina G.; Mitchel, Aaron D.

    2016-01-01

    Speech is inextricably multisensory: both auditory and visual components provide critical information for all aspects of speech processing, including speech segmentation, the visual components of which have been the target of a growing number of studies. In particular, a recent study (Mitchel and Weiss, 2014) established that adults can utilize facial cues (i.e., visual prosody) to identify word boundaries in fluent speech. The current study expanded upon these results, using an eye tracker to identify highly attended facial features of the audiovisual display used in Mitchel and Weiss (2014). Subjects spent the most time watching the eyes and mouth. A significant trend in gaze durations was found with the longest gaze duration on the mouth, followed by the eyes and then the nose. In addition, eye-gaze patterns changed across familiarization as subjects learned the word boundaries, showing decreased attention to the mouth in later blocks while attention on other facial features remained consistent. These findings highlight the importance of the visual component of speech processing and suggest that the mouth may play a critical role in visual speech segmentation. PMID:26869959

  13. Neural correlates of visualizations of concrete and abstract words in preschool children: a developmental embodied approach

    PubMed Central

    D’Angiulli, Amedeo; Griffiths, Gordon; Marmolejo-Ramos, Fernando

    2015-01-01

    The neural correlates of visualization underlying word comprehension were examined in preschool children. On each trial, a concrete or abstract word was delivered binaurally (part 1: post-auditory visualization), followed by a four-picture array (a target plus three distractors; part 2: matching visualization). Children were to select the picture matching the word they heard in part 1. Event-related potentials (ERPs) locked to each stimulus presentation and task interval were averaged over sets of trials of increasing word abstractness. The ERP time-course during both parts of the task showed that early activity (i.e., <300 ms) was predominant in response to concrete words, while activity in response to abstract words became evident only at intermediate (i.e., 300–699 ms) and late (i.e., 700–1000 ms) ERP intervals. Specifically, ERP topography showed that while early activity during post-auditory visualization was linked to left temporo-parietal areas for concrete words, early activity during matching visualization occurred mostly in occipito-parietal areas for concrete words, but more anteriorly, in centro-parietal areas, for abstract words. In intermediate ERPs, post-auditory visualization coincided with parieto-occipital and parieto-frontal activity in response to both concrete and abstract words, while in matching visualization a parieto-central activity was common to both types of words. In the late ERPs for both types of words, post-auditory visualization involved right-hemispheric activity following a posterior-to-anterior pathway sequence (occipital, parietal, and temporal areas); conversely, matching visualization involved left-hemispheric activity following an anterior-to-posterior pathway sequence (frontal, temporal, parietal, and occipital areas). These results suggest that, for both concrete and abstract words, meaning in young children depends on variably complex visualization processes integrating visuo-auditory experiences and supramodal embodied representations. PMID:26175697

  14. Emotional words facilitate lexical but not early visual processing.

    PubMed

    Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M

    2015-12-12

    Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e., in early visual cortex. However, studies of emotional words have yielded mixed results. In the current study, ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity during emotional word processing. Neutral and negative words were flickered at 12.14 Hz while participants performed a lexical decision task. Neither emotional word content nor word lexicality modulated the 12.14 Hz SSVEP amplitude. However, emotional words affected the ERP: negative compared with neutral words, as well as words compared with pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared with neutral words and enhanced for pseudowords compared with words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that the processing of written emotional words is facilitated only subsequent to visual analysis.

  15. Supervised guiding long-short term memory for image caption generation based on object classes

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan

    2018-03-01

    Existing models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in the guidance information. To solve these problems, we propose a supervised guiding Long Short-Term Memory model based on object classes, named S-gLSTM for short. It uses high-confidence object detection results from R-FCN as supervisory information, and it updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. The extracted information is fed into the S-gLSTM at each iteration as guidance to steer caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Supplying guidance information at each iteration addresses the attenuation of visual semantic information seen in the traditional LSTM model. In addition, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than state-of-the-art models.
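    The guidance-update rule can be sketched in a few lines of Python. In the sketch below, high-confidence detections form the guidance word set, and a word is dropped from the set once the decoder has emitted it, so the remaining guidance stays focused on objects not yet mentioned; the detections, threshold, and caption are invented stand-ins, not the paper's actual components.

    ```python
    # Invented stand-ins for detector output (R-FCN in the paper) and for
    # the decoder's word sequence.
    detections = [("dog", 0.97), ("frisbee", 0.91), ("tree", 0.42)]
    confidence_threshold = 0.8

    # Supervisory information: high-confidence object classes only.
    guidance = {cls for cls, score in detections
                if score >= confidence_threshold}

    caption = []
    for word in ["a", "dog", "jumps", "for", "a", "frisbee"]:
        caption.append(word)
        if word in guidance:        # last output matches supervisory info:
            guidance.discard(word)  # update the guidance word set
        print(f"emitted {word!r:10} remaining guidance: {sorted(guidance)}")
    ```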

  16. Semantic information mediates visual attention during spoken word recognition in Chinese: Evidence from the printed-word version of the visual-world paradigm.

    PubMed

    Shen, Wei; Qu, Qingqing; Li, Xingshan

    2016-07-01

    In the present study, we investigated whether the activation of semantic information during spoken word recognition can mediate the deployment of visual attention to printed Chinese words. We used a visual-world paradigm with printed words, in which participants listened to a spoken target word embedded in a neutral spoken sentence while looking at a visual display of printed words. We examined whether a semantic competitor effect could be observed in the printed-word version of the visual-world paradigm. In Experiment 1, the relationship between the spoken target words and the printed words was manipulated so that they were semantically related (a semantic competitor), phonologically related (a phonological competitor), or unrelated (distractors). We found that the probability of fixations on semantic competitors was significantly higher than that of fixations on the distractors. In Experiment 2, the orthographic similarity between the spoken target words and their semantic competitors was manipulated to further examine whether the semantic competitor effect was modulated by orthographic similarity. We found significant semantic competitor effects regardless of orthographic similarity. Our study not only reveals that semantic information can affect visual attention but also provides important new insights into the methodology employed to investigate the semantic processing of spoken words using the printed-word version of the visual-world paradigm.

  17. Assessing the Formation of Experience-Based Gender Expectations in an Implicit Learning Scenario

    PubMed Central

    Öttl, Anton; Behne, Dawn M.

    2017-01-01

    The present study investigates the formation of new word-referent associations in an implicit learning scenario, using a gender-coded artificial language with spoken words and visual referents. Previous research has shown that when participants are explicitly instructed about the gender-coding system underlying an artificial lexicon, they monitor the frequency of exposure to male vs. female referents within this lexicon, and subsequently use this probabilistic information to predict the gender of an upcoming referent. In an explicit learning scenario, the auditory and visual gender cues are necessarily highlighted prior to acquisition, and the effects previously observed may therefore depend on participants' overt awareness of these cues. To assess whether the formation of experience-based expectations is dependent on explicit awareness of the underlying coding system, we present data from an experiment in which gender coding was acquired implicitly, thereby reducing the likelihood that visual and auditory gender cues are used strategically during acquisition. Results show that even if the gender-coding system was not perfectly mastered (as reflected in the number of gender-coding errors), participants develop frequency-based expectations comparable to those previously observed in an explicit learning scenario. In line with previous findings, participants are quicker at recognizing a referent whose gender is consistent with an induced expectation than one whose gender is inconsistent with it. At the same time, however, eyetracking data suggest that these expectations may surface earlier in an implicit learning scenario. These findings suggest that experience-based expectations are robust to the manner of acquisition, and they contribute to understanding why similar expectations observed in the activation of stereotypes during the processing of natural language stimuli are difficult or impossible to suppress. PMID:28936186

  18. Where Are the Quadratic's Complex Roots?

    ERIC Educational Resources Information Center

    Páll-Szabó, Ágnes Orsolya

    2015-01-01

    A picture is worth more than a thousand words--in mathematics too. Many students fail in learning mathematics because, in some cases, teachers do not offer the necessary visualization. Nowadays technology overcomes this problem: computer-aided instruction is one of the most efficient methods of teaching mathematics. In this article we try to…

  19. Radical Thoughts on Simplifying Square Roots

    ERIC Educational Resources Information Center

    Schultz, Kyle T.; Bismarck, Stephen F.

    2013-01-01

    A picture is worth a thousand words. This statement is especially true in mathematics teaching and learning. Visual representations such as pictures, diagrams, charts, and tables can illuminate ideas that can be elusive when displayed in symbolic form only. The prevalence of representation as a mathematical process in such documents as…

  20. Spelling and Learning Style in Children.

    ERIC Educational Resources Information Center

    Riding, R. J.; Tempest, J.

    1986-01-01

    Seventy-two 11-year-old students were tested on 32 dictated words containing two levels of both visual and phonemic complexity. Students were grouped within sexes on their extraversion scores on the Junior Eysenck Personality Inventory and quotients on the Raven's Matrices. Spelling performance was found to interact significantly with level of…

  1. Brain activation in teenagers with isolated spelling disorder during tasks involving spelling assessment and comparison of pseudowords. fMRI study.

    PubMed

    Borkowska, Aneta Rita; Francuz, Piotr; Soluch, Paweł; Wolak, Tomasz

    2014-10-01

    The present study aimed at defining the specific traits of brain activation in teenagers with isolated spelling disorder in comparison with good spellers. An fMRI examination was performed in which the subjects' task involved deciding (1) whether visually presented words were spelled correctly or not (the orthographic decision task), and (2) whether two presented letter strings (pseudowords) were identical or not (the visual decision task). Half of the displays showing meaningful words with an orthographic difficulty contained pairs with both words spelled correctly, and half contained one misspelled word. Half of the pseudoword pairs were identical and half were not. The participants included 15 individuals with isolated spelling disorder and 14 good spellers, aged 13-15. The results demonstrated that the essential differences in brain activation between teenagers with isolated spelling disorder and good spellers were found in the left inferior frontal gyrus, left medial frontal gyrus, and right cerebellum posterior lobe, i.e., structures important for language processes, working memory, and automaticity of behaviour. Spelling disorder is thus not only an effect of language dysfunction; it may also reflect difficulties in learning and automatizing the motor and visual forms of written words, in rapid information processing, and in the automatized use of the orthographic lexicon. Copyright © 2013 The Japanese Society of Child Neurology. Published by Elsevier B.V. All rights reserved.

  2. Symbol Grounding Without Direct Experience: Do Words Inherit Sensorimotor Activation From Purely Linguistic Context?

    PubMed

    Günther, Fritz; Dudschig, Carolin; Kaup, Barbara

    2018-05-01

    Theories of embodied cognition assume that concepts are grounded in non-linguistic, sensorimotor experience. In support of this assumption, previous studies have shown that upwards response movements are faster than downwards movements after participants have been presented with words whose referents are typically located in the upper vertical space (and vice versa for downwards responses). This is taken as evidence that processing these words reactivates sensorimotor experiential traces. This congruency effect was also found for novel words after participants learned these words as labels for novel objects that they encountered either in their upper or lower visual field. While this indicates that direct experience with a word's referent is sufficient to evoke such congruency effects, the present study investigates whether this direct experience is also a necessary condition. To this end, we conducted five experiments in which participants learned novel words from purely linguistic input: novel words were presented in pairs with real up- or down-words (Experiment 1); they were presented in natural sentences where they replaced these real words (Experiment 2); they were presented as new labels for these real words (Experiment 3); and they were presented as labels for novel combined concepts based on these real words (Experiments 4 and 5). In all five experiments, we did not find any congruency effects elicited by the novel words; however, participants were always able to make correct explicit judgements about the vertical dimension associated with the novel words. These results suggest that direct experience is necessary for reactivating experiential traces, but that this reactivation is not a necessary condition for understanding (in the sense of storing and accessing) the corresponding aspects of word meaning. Copyright © 2017 Cognitive Science Society, Inc.

  3. Development of visual expertise for reading: rapid emergence of visual familiarity for an artificial script

    PubMed Central

    Maurer, Urs; Blau, Vera C.; Yoncheva, Yuliya N.; McCandliss, Bruce D.

    2010-01-01

    Adults produce left-lateralized N170 responses to visual words relative to control stimuli, even within tasks that do not require active reading. This specialization begins in preschoolers as a right-lateralized N170 effect. We investigated whether this developmental shift reflects an early learning phenomenon, such as attaining visual familiarity with a script, by training adults in an artificial script and measuring N170 responses before and afterward. Training enhanced the N170 response, especially over the right hemisphere. This suggests N170 sensitivity to visual familiarity with a script before reading becomes sufficiently automatic to drive left-lateralized effects in a shallow encoding task. PMID:20614357

  4. Modeling loosely annotated images using both given and imagined annotations

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Boujemaa, Nozha; Chen, Yunhao; Deng, Lei

    2011-12-01

    In this paper, we present an approach to learning latent semantic analysis models from loosely annotated images for automatic image annotation and indexing. The given annotation in training images is loose for two reasons: (1) ambiguous correspondences between visual features and annotated keywords, and (2) incomplete lists of annotated keywords. The second reason motivates us to enrich the incomplete annotation in a simple way before learning a topic model. In particular, some "imagined" keywords are added to the incomplete annotation by measuring similarity between keywords in terms of their co-occurrence. Then, both given and imagined annotations are employed to learn probabilistic topic models for automatically annotating new images. We conduct experiments on two image databases (i.e., Corel and ESP) coupled with their loose annotations, and compare the proposed method with state-of-the-art discrete annotation methods. The proposed method improves word-driven probabilistic latent semantic analysis (PLSA-words) to a performance comparable with the best discrete annotation method, while retaining a merit of PLSA-words, namely a wider semantic range.
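
    A minimal sketch of the enrichment step, with hypothetical helper names (the record does not specify the exact similarity measure, so row-normalised co-occurrence counts stand in for it): keyword-keyword similarity is estimated from how often keywords co-occur in training annotations, and the top-scoring related keywords are "imagined" into an incomplete annotation before the topic model is learned.

    ```python
    import numpy as np

    def cooccurrence_similarity(annotations, vocab):
        """Keyword-keyword similarity from annotation co-occurrence.

        annotations: list of keyword lists, one per training image.
        Returns S with S[a, b] ~ P(keyword b | keyword a)."""
        idx = {w: i for i, w in enumerate(vocab)}
        C = np.zeros((len(vocab), len(vocab)))
        for keywords in annotations:
            for a in set(keywords):
                for b in set(keywords):
                    if a != b:
                        C[idx[a], idx[b]] += 1.0
        return C / (C.sum(axis=1, keepdims=True) + 1e-12)

    def enrich(keywords, vocab, sim, k=2):
        """Add the k 'imagined' keywords most similar to the given ones."""
        idx = {w: i for i, w in enumerate(vocab)}
        scores = sim[[idx[w] for w in keywords]].sum(axis=0)
        for w in keywords:                   # never re-add a given keyword
            scores[idx[w]] = -np.inf
        return list(keywords) + [vocab[i] for i in np.argsort(scores)[::-1][:k]]
    ```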

  5. Stimulus modality and working memory performance in Greek children with reading disabilities: additional evidence for the pictorial superiority hypothesis.

    PubMed

    Constantinidou, Fofi; Evripidou, Christiana

    2012-01-01

    This study investigated the effects of stimulus presentation modality on working memory performance in children with reading disabilities (RD) and in typically developing children (TDC), all native speakers of Greek. It was hypothesized that the visual presentation of common objects would result in improved learning and recall performance as compared to the auditory presentation of stimuli. Twenty children, ages 10-12, diagnosed with RD were matched to 20 TDC age peers. The experimental tasks implemented a multitrial verbal learning paradigm incorporating three modalities: auditory, visual, and auditory plus visual. Significant group differences were noted on language, verbal and nonverbal memory, and measures of executive abilities. A mixed-model MANOVA indicated that children with RD had a slower learning curve and recalled fewer words than TDC across experimental modalities. Both groups of participants benefited from the visual presentation of objects; however, children with RD showed the greatest gains during this condition. In conclusion, working memory for common verbal items is impaired in children with RD; however, performance can be facilitated, and learning efficiency maximized, when information is presented visually. The results provide further evidence for the pictorial superiority hypothesis and the theory that pictorial presentation of verbal stimuli is adequate for dual coding.

  6. Learning to Read an Alphabet of Human Faces Produces Left-lateralized Training Effects in the Fusiform Gyrus

    PubMed Central

    Moore, Michelle W.; Durisko, Corrine; Perfetti, Charles A.; Fiez, Julie A.

    2014-01-01

    Numerous functional neuroimaging studies have shown that most orthographic stimuli, such as printed English words, produce a left-lateralized response within the fusiform gyrus (FG) at a characteristic location termed the visual word form area (VWFA). We developed an experimental alphabet (FaceFont) comprising 35 face–phoneme pairs to disentangle phonological and perceptual influences on the lateralization of orthographic processing within the FG. Using functional imaging, we found that a region in the vicinity of the VWFA responded to FaceFont words more strongly in trained versus untrained participants, whereas no differences were observed in the right FG. The trained response magnitudes in the left FG region correlated with behavioral reading performance, providing strong evidence that the neural tissue recruited by training supported the newly acquired reading skill. These results indicate that the left lateralization of orthographic processing is not restricted to stimuli with particular visual-perceptual features. Instead, lateralization may occur because the anatomical projections in the vicinity of the VWFA provide a unique interconnection between the visual system and left-lateralized language areas involved in the representation of speech. PMID:24168219

  7. Mapping the meanings of novel visual symbols by youth with moderate or severe mental retardation.

    PubMed

    Romski, M A; Sevcik, R A; Robinson, B F; Mervis, C B; Bertrand, J

    1996-01-01

    The word-learning ability of 12 school-age subjects with moderate or severe mental retardation was assessed. Subjects had little or no functional speech and used the System for Augmenting Language with visual-graphic symbols for communication. Their ability to fast map novel symbols revealed whether they possessed the novel name-nameless category (N3C) lexical operating principle. On first exposure, 7 subjects were able to map symbol meanings for novel objects. Follow-up assessments indicated that mappers retained comprehension of some of the novel words for delays of up to 15 days and generalized their knowledge to production. The ability to fast map was reliably related to symbol achievement status. Implications for understanding vocabulary acquisition by youth with mental retardation are discussed.

  8. Massive cortical reorganization in sighted Braille readers

    PubMed Central

    Siuda-Krzywicka, Katarzyna; Bola, Łukasz; Paplińska, Małgorzata; Sumera, Ewa; Jednoróg, Katarzyna; Marchewka, Artur; Śliwińska, Magdalena W; Amedi, Amir; Szwed, Marcin

    2016-01-01

    The brain is capable of large-scale reorganization in blindness or after massive injury. Such reorganization crosses the division into separate sensory cortices (visual, somatosensory...). As a result, the visual cortex of the blind becomes active during tactile Braille reading. Although the possibility of such reorganization in the normal, adult brain has been raised, definitive evidence has been lacking. Here, we demonstrate such extensive reorganization in normal, sighted adults who learned Braille while their brain activity was investigated with fMRI and transcranial magnetic stimulation (TMS). Subjects showed enhanced activity for tactile reading in the visual cortex, including the visual word form area (VWFA), that was modulated by their Braille reading speed, as well as strengthened resting-state connectivity between visual and somatosensory cortices. Moreover, TMS disruption of VWFA activity decreased their tactile reading accuracy. Our results indicate that large-scale reorganization is a viable mechanism recruited when learning complex skills. DOI: http://dx.doi.org/10.7554/eLife.10762.001 PMID:26976813

  9. Language experience changes subsequent learning

    PubMed Central

    Onnis, Luca; Thiessen, Erik

    2013-01-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. PMID:23200510
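
    The statistics at stake here are the forward and backward transitional probabilities between adjacent elements. As a concrete illustration (a minimal sketch, not the study's analysis code), both can be computed from an element stream as follows:

    ```python
    from collections import Counter

    def transitional_probabilities(sequence):
        """Forward TP(x -> y) = freq(xy) / freq(x as a first element);
        backward TP(x -> y) = freq(xy) / freq(y as a second element)."""
        pairs = Counter(zip(sequence, sequence[1:]))
        firsts = Counter(sequence[:-1])
        seconds = Counter(sequence[1:])
        forward = {(x, y): n / firsts[x] for (x, y), n in pairs.items()}
        backward = {(x, y): n / seconds[y] for (x, y), n in pairs.items()}
        return forward, backward

    # Two parses are "equally probable and orthogonal" when, e.g., forward
    # TPs favour a boundary in one place while backward TPs favour another,
    # so any systematic preference at test reflects the learner's bias.
    ```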

  10. Signed reward prediction errors drive declarative learning.

    PubMed

    De Loof, Esther; Ergo, Kate; Naert, Lien; Janssens, Clio; Talsma, Durk; Van Opstal, Filip; Verguts, Tom

    2018-01-01

    Reward prediction errors (RPEs) are thought to drive learning. This has been established in procedural learning (e.g., classical and operant conditioning). However, empirical evidence on whether RPEs drive declarative learning, a quintessentially human form of learning, remains surprisingly absent. We therefore coupled RPEs to the acquisition of Dutch-Swahili word pairs in a declarative learning paradigm. Signed RPEs (SRPEs; "better-than-expected" signals) during declarative learning improved recognition in a follow-up test, with increasingly positive RPEs leading to better recognition. In addition, classic declarative memory mechanisms such as time-on-task failed to explain recognition performance. The beneficial effect of SRPEs on recognition was subsequently affirmed in a replication study with visual stimuli.
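
    For illustration, a toy sketch of the signed versus unsigned distinction (function names hypothetical): a signed account orders a +3 trial above a -3 trial, whereas an unsigned (surprise/salience) account treats them alike.

    ```python
    def signed_rpe(reward, expected):
        """Signed RPE: positive means 'better than expected'."""
        return reward - expected

    def unsigned_rpe(reward, expected):
        """Unsigned RPE: magnitude of surprise, sign discarded."""
        return abs(reward - expected)

    # Two study trials with an expected reward of 4: one pays 7 (SRPE = +3),
    # one pays 1 (SRPE = -3). The SRPE account predicts better later
    # recognition of the word pair studied on the +3 trial; an unsigned
    # account predicts no difference, since |+3| == |-3|.
    ```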

  11. Learning to read words in a new language shapes the neural organization of the prior languages.

    PubMed

    Mei, Leilei; Xue, Gui; Lu, Zhong-Lin; Chen, Chuansheng; Zhang, Mingxia; He, Qinghua; Wei, Miao; Dong, Qi

    2014-12-01

    Learning a new language entails interactions with one's prior language(s). Much research has shown how native language affects the cognitive and neural mechanisms of a new language, but little is known about whether and how learning a new language shapes the neural mechanisms of prior language(s). In two experiments in the current study, we used an artificial language training paradigm in combination with an fMRI to examine (1) the effects of different linguistic components (phonology and semantics) of a new language on the neural process of prior languages (i.e., native and second languages), and (2) whether such effects were modulated by the proficiency level in the new language. Results of Experiment 1 showed that when the training in a new language involved semantics (as opposed to only visual forms and phonology), neural activity during word reading in the native language (Chinese) was reduced in several reading-related regions, including the left pars opercularis, pars triangularis, bilateral inferior temporal gyrus, fusiform gyrus, and inferior occipital gyrus. Results of Experiment 2 replicated the results of Experiment 1 and further found that semantic training also affected neural activity during word reading in the subjects' second language (English). Furthermore, we found that the effects of the new language were modulated by the subjects' proficiency level in the new language. These results provide critical imaging evidence for the influence of learning to read words in a new language on word reading in native and second languages. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Auditory and visual sequence learning in humans and monkeys using an artificial grammar learning paradigm.

    PubMed

    Milne, Alice E; Petkov, Christopher I; Wilson, Benjamin

    2017-07-05

    Language flexibly supports the human ability to communicate using different sensory modalities, such as writing and reading in the visual modality and speaking and listening in the auditory domain. Although it has been argued that nonhuman primate communication abilities are inherently multisensory, direct behavioural comparisons between human and nonhuman primates are scant. Artificial grammar learning (AGL) tasks and statistical learning experiments can be used to emulate ordering relationships between words in a sentence. However, previous comparative work using such paradigms has primarily investigated sequence learning within a single sensory modality. We used an AGL paradigm to evaluate how humans and macaque monkeys learn and respond to identically structured sequences of either auditory or visual stimuli. In the auditory and visual experiments, we found that both species were sensitive to the ordering relationships between elements in the sequences. Moreover, the humans and monkeys produced largely similar response patterns to the visual and auditory sequences, indicating that the sequences are processed in comparable ways across the sensory modalities. These results provide evidence that human sequence processing abilities stem from an evolutionarily conserved capacity that appears to operate comparably across the sensory modalities in both human and nonhuman primates. The findings set the stage for future neurobiological studies to investigate the multisensory nature of these sequencing operations in nonhuman primates and how they compare to related processes in humans. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
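
    As a sketch of how identically structured sequences can be realised in either modality, here is a toy finite-state grammar (entirely hypothetical, not the grammar used in the study); swapping the emission alphabet between syllables and shapes leaves the ordering relationships untouched.

    ```python
    import random

    # States map to (emitted token, next state) choices; tokens are abstract
    # element identities that can be rendered as sounds or as images.
    GRAMMAR = {
        "S0": [("A", "S1")],
        "S1": [("B", "S2"), ("C", "S2")],
        "S2": [("D", "END"), ("B", "S1")],   # optional loop back to S1
    }

    def generate(grammar, state="S0", max_len=10):
        """Generate one grammatical sequence of element identities."""
        seq = []
        while state != "END" and len(seq) < max_len:
            token, state = random.choice(grammar[state])
            seq.append(token)
        return seq

    # generate(GRAMMAR) -> e.g. ['A', 'C', 'B', 'B', 'D']; test violations
    # can be built by permuting tokens against these ordering rules.
    ```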

  13. What predicts successful literacy acquisition in a second language?

    PubMed Central

    Frost, Ram; Siegelman, Noam; Narkiss, Alona; Afek, Liron

    2013-01-01

    We examined whether success (or failure) in assimilating the structure of a second language could be predicted by general statistical learning abilities that are non-linguistic in nature. We employed a visual statistical learning (VSL) task, monitoring our participants’ implicit learning of the transitional probabilities of visual shapes. A pretest revealed that performance in the VSL task is not correlated with abilities related to a general G factor or working memory. We found that native speakers of English who picked up the implicit statistical structure embedded in the continuous stream of shapes, on average, better assimilated the Semitic structure of Hebrew words. Our findings thus suggest that languages and their writing systems are characterized by idiosyncratic correlations of form and meaning, and these are picked up in the process of literacy acquisition, as they are picked up in any other type of learning, for the purpose of making sense of the environment. PMID:23698615

  14. The architecture of intuition: Fluency and affect determine intuitive judgments of semantic and visual coherence and judgments of grammaticality in artificial grammar learning.

    PubMed

    Topolinski, Sascha; Strack, Fritz

    2009-02-01

    People can intuitively detect whether a word triad has a common remote associate (coherent) or does not have one (incoherent) before and independently of actually retrieving the common associate. The authors argue that semantic coherence increases the processing fluency for coherent triads and that this increased fluency triggers a brief and subtle positive affect, which is the experiential basis of these intuitions. In a series of 11 experiments with 3 different fluency manipulations (figure-ground contrast, repeated exposure, and subliminal visual priming) and 3 different affect inductions (short-timed facial feedback, subliminal facial priming, and affect-laden word triads), high fluency and positive affect independently and additively increased the probability that triads would be judged as coherent, irrespective of actual coherence. The authors could equalize and even reverse coherence judgments (i.e., incoherent triads were judged to be coherent more frequently than were coherent triads). When explicitly instructed, participants were unable to correct their judgments for the influence of affect, although they were aware of the manipulation. The impact of fluency and affect was also generalized to intuitions of visual coherence and intuitions of grammaticality in an artificial grammar learning paradigm. (PsycINFO Database Record (c) 2009 APA, all rights reserved).

  15. Genetic and Environmental Overlap between Chinese and English Reading-Related Skills in Chinese Children

    ERIC Educational Resources Information Center

    Wong, Simpson W. L.; Chow, Bonnie Wing-Yin; Ho, Connie Suk-Han; Waye, Mary M. Y.; Bishop, Dorothy V. M.

    2014-01-01

    This twin study examined the relative contributions of genes and environment on 2nd language reading acquisition of Chinese-speaking children learning English. We examined whether specific skills-visual word recognition, receptive vocabulary, phonological awareness, phonological memory, and speech discrimination-in the 1st and 2nd languages have…

  16. Visualizing Neuroscience: Learning about the Brain through Art

    ERIC Educational Resources Information Center

    Chudler, Eric H.; Konrady, Paula

    2006-01-01

    Neuroscience is a subject that can motivate, excite, and stimulate the curiosity of everyone. However, the study of the brain is made difficult by an abundance of new vocabulary words and abstract concepts. Although neuroscience has the potential to inspire students, many teachers find it difficult to include a study of the brain in their…

  17. Sounds and Meanings Working Together: Word Learning as a Collaborative Effort

    ERIC Educational Resources Information Center

    Saffran, Jenny

    2014-01-01

    Over the past several decades, researchers have discovered a great deal of information about the processes underlying language acquisition. From as early as they can be studied, infants are sensitive to the nuances of native-language sound structure. Similarly, infants are attuned to the visual and conceptual structure of their environments…

  18. A Picture Is Worth a Thousand Words: Applying Image-Based Learning to Course Design

    ERIC Educational Resources Information Center

    Whitley, Cameron T.

    2013-01-01

    Although images are often used in the classroom to communicate difficult concepts, students have little input into their selection and application. This approach can create a passive experience for students and represents a missed opportunity for instructors to engage participation. By applying concepts found in visual sociology to techniques…

  19. Engaging Students through Image and Word

    ERIC Educational Resources Information Center

    Newland, Abby

    2013-01-01

    This article focuses on the connection between the visual arts and language arts with the many teaching and learning possibilities that may arise from an art curriculum infused with language arts. As a K-5 art specialist in a rural Georgia public school, the author feels passionately about the importance of interdisciplinary art education for…

  20. Media Literacy: What, Why, and How?

    ERIC Educational Resources Information Center

    Grace, Donna J.

    2005-01-01

    Literacy has traditionally been associated with the printed word. But today, print literacy is not enough. Children and youth need to learn to "read" and interpret visual images as well. Film, television, videos, DVDs, computer games, and the Internet all hold a prominent and pervasive place in one's culture. Its presence in people's lives is only…

  1. Infographics: More than Words Can Say

    ERIC Educational Resources Information Center

    Krauss, Jane

    2012-01-01

    Good learning experiences ask students to investigate and make sense of the world. While there are many ways to do this, K-12 curriculum has traditionally skewed toward reading and writing to interpret and express students' sense-making. But there is another way. Infographics represent data and ideas visually, in pictures, engaging more parts of…

  2. Educational Technology in Distance Learning (for the Deaf).

    ERIC Educational Resources Information Center

    Hales, Gerald

    This discussion of the use of distance education for deaf students argues that distance education methodologies appear to be relatively attractive to the hearing impaired student because they rely to a substantial extent upon the written word and visual transmission of information. Several projects that use computer or interactive systems to teach…

  3. Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain

    ERIC Educational Resources Information Center

    Hannagan, Thomas; Grainger, Jonathan

    2012-01-01

    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…

  4. When semantics aids phonology: A processing advantage for iconic word forms in aphasia.

    PubMed

    Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella

    2015-09-01

    Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Visual event-related potential studies supporting the validity of VARK learning styles' visual and read/write learners.

    PubMed

    Thepsatitporn, Sarawin; Pichitpornchai, Chailerd

    2016-06-01

    The validity of learning styles needs the support of additional objective evidence. The identification of learning styles using subjective evidence from VARK questionnaires (where V is visual, A is auditory, R is read/write, and K is kinesthetic) combined with objective evidence from visual event-related potential (vERP) studies has never been investigated. It is questionable whether picture superiority effects exist in V learners and R learners. Thus, the present study aimed to investigate whether vERP could show the relationship between vERP components and VARK learning styles, and to identify the existence of picture superiority effects in V learners and R learners. Thirty medical students (15 V learners and 15 R learners) performed recognition tasks with vERP and an intermediate-term memory (ITM) test. Within-group comparisons showed that pictures elicited larger P200 amplitudes than words at the occipital 2 site (P < 0.05) in V learners and at the occipital 1 and 2 sites (P < 0.05) in R learners. The between-groups comparison showed that P200 amplitudes elicited by pictures in V learners were larger than those of R learners at the parietal 4 site (P < 0.05). The ITM test showed that pictures elicited distinctly more correct responses than words for both V learners (P < 0.001) and R learners (P < 0.01). In conclusion, the results indicated that the P200 amplitude at the parietal 4 site could be used to objectively distinguish V learners from R learners. A lateralization to the right hemisphere (occipital 2 site) existed in V learners. The ITM test demonstrated the existence of picture superiority effects in both groups of learners. The results provide the first objective electrophysiological evidence partially supporting the validity of the subjective psychological VARK questionnaire. Copyright © 2016 The American Physiological Society.

  6. Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study

    PubMed Central

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

    Background: In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. The issue of whether phonological information is rapidly and automatically extracted from Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings: We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were homophonous Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance: We showed that the violation of lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activations of the visual cortex, suggesting that visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235

  7. Visual attention based bag-of-words model for image classification

    NASA Astrophysics Data System (ADS)

    Wang, Qiwei; Wan, Shouhong; Yue, Lihua; Wang, Che

    2014-04-01

    Bag-of-words is a classical method for image classification. The core problems are how to count the frequencies of the visual words and which visual words to select. In this paper, we propose a visual attention based bag-of-words model (VABOW model) for the image classification task. The VABOW model utilizes a visual attention method to generate a saliency map, and uses the saliency map as a weighting matrix to guide the counting of visual word frequencies. In addition, the VABOW model combines shape, color and texture cues and uses L1 regularized logistic regression to select the most relevant and most efficient features. We compare our approach with a traditional bag-of-words based method on two datasets, and the results show that our VABOW model outperforms the state-of-the-art method for image classification.
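
    A compact sketch of the two components (function and variable names hypothetical; the paper's exact weighting scheme is not reproduced): each local descriptor votes for its visual word with the saliency value at its keypoint rather than with a count of 1, and L1-regularised logistic regression supplies the sparse feature selection.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def vabow_histogram(word_ids, keypoints_xy, saliency, n_words):
        """Saliency-weighted bag-of-visual-words histogram.

        word_ids: visual word index per local feature;
        keypoints_xy: (x, y) image location per local feature;
        saliency: 2-D saliency map with values in [0, 1]."""
        hist = np.zeros(n_words)
        for w, (x, y) in zip(word_ids, keypoints_xy):
            hist[w] += saliency[int(y), int(x)]   # salient features count more
        total = hist.sum()
        return hist / total if total > 0 else hist

    # Sparse selection over concatenated shape/color/texture histograms:
    # after fitting, nonzero coefficients mark the retained features.
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5)
    # clf.fit(X_train, y_train)
    ```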

  8. Functional Specificity of the Visual Word Form Area: General Activation for Words and Symbols but Specific Network Activation for Words

    ERIC Educational Resources Information Center

    Reinke, Karen; Fernandes, Myra; Schwindt, Graeme; O'Craven, Kathleen; Grady, Cheryl L.

    2008-01-01

    The functional specificity of the brain region known as the Visual Word Form Area (VWFA) was examined using fMRI. We explored whether this area serves a general role in processing symbolic stimuli, rather than being selective for the processing of words. Brain activity was measured during a visual 1-back task to English words, meaningful symbols…

  9. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Dictionary Pruning with Visual Word Significance for Medical Image Retrieval

    PubMed Central

    Zhang, Fan; Song, Yang; Cai, Weidong; Hauptmann, Alexander G.; Liu, Sidong; Pujol, Sonia; Kikinis, Ron; Fulham, Michael J; Feng, David Dagan; Chen, Mei

    2016-01-01

    Content-based medical image retrieval (CBMIR) is an active research area for disease diagnosis and treatment, but it can be problematic given the small visual variations between anatomical structures. We propose a retrieval method based on a bag-of-visual-words (BoVW) model to identify discriminative characteristics between different medical images, with a Pruned Dictionary based on Latent Semantic Topic description. We refer to this as PD-LST retrieval. Our method has two main components. First, we calculate a topic-word significance value for each visual word given a certain latent topic to evaluate how the word is connected to this latent topic. The latent topics are learnt based on the relationship between the images and words, and are employed to bridge the gap between low-level visual features and high-level semantics. These latent topics describe the images and words semantically and can thus facilitate more meaningful comparisons between the words. Second, we compute an overall-word significance value to evaluate the significance of a visual word within the entire dictionary. We designed an iterative ranking method to measure overall-word significance by considering the relationship between all latent topics and words. Words with higher values are considered meaningful, with more significant discriminative power in differentiating medical images. We evaluated our method on two public medical imaging datasets and it showed improved retrieval accuracy and efficiency. PMID:27688597
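
    A minimal sketch of one plausible reading of the iterative ranking step (names hypothetical; the published PD-LST formulation may differ in its exact significance measures and normalisations): word scores are propagated through the topic-word relationship and back, power-iteration style, and the lowest-scoring visual words are pruned from the dictionary.

    ```python
    import numpy as np

    def overall_word_significance(phi, n_iter=50):
        """Rank visual words over the topic-word graph by power iteration.

        phi: (n_topics, n_words) nonnegative topic-word weights, e.g. from
        PLSA/LDA. Words tied strongly to many topics accumulate high scores."""
        tw = phi / (phi.sum(axis=1, keepdims=True) + 1e-12)       # topic -> word
        wt = (phi / (phi.sum(axis=0, keepdims=True) + 1e-12)).T   # word -> topic
        score = np.full(phi.shape[1], 1.0 / phi.shape[1])
        for _ in range(n_iter):
            topic_score = wt.T @ score      # gather word scores into topics
            score = tw.T @ topic_score      # redistribute back onto words
            score /= score.sum()
        return score

    def prune_dictionary(phi, keep_ratio=0.5):
        """Indices of the most significant visual words to retain."""
        score = overall_word_significance(phi)
        k = int(len(score) * keep_ratio)
        return np.argsort(score)[::-1][:k]
    ```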

  12. Effects of Experimentally Imposed Noise on Task Performance of Black Children Attending Day Care Centers Near Elevated Subway Trains.

    ERIC Educational Resources Information Center

    Hambrick-Dixon, Priscilla Janet

    1986-01-01

    Investigates whether experimentally imposed 80 dB(A) noise affected psychomotor performance, serial memory for words and pictures, incidental memory, visual recall, paired associates, perceptual learning, and coding performance of five-year-old Black children attending day care centers near and far from elevated subways. (HOD)

  13. Teaching Students How to Self-Regulate Their Online Vocabulary Learning by Using a Structured Think-to-Yourself Procedure

    ERIC Educational Resources Information Center

    Ebner, Rachel J.; Ehri, Linnea C.

    2016-01-01

    Using the Internet for vocabulary development is a powerful way for students to rapidly expand their vocabularies. The Internet affords students opportunities to interact both instantaneously and multimodally with words in different contexts. By using search engines and hyperlinks, students can immediately access textual, visual, and auditory…

  14. Signs as Pictures and Signs as Words: Effect of Language Knowledge on Memory for New Vocabulary.

    ERIC Educational Resources Information Center

    Siple, Patricia; And Others

    1982-01-01

    The role of sensory attributes in a vocabulary learning task was investigated for a non-oral language using deaf and hearing individuals, more or less skilled in the use of sign language. Skilled signers encoded invented signs in terms of linguistic structure rather than as visual-pictorial events. (Author/RD)

  15. The anatomy of language: contributions from functional neuroimaging

    PubMed Central

    PRICE, CATHY J.

    2000-01-01

    This article illustrates how functional neuroimaging can be used to test the validity of neurological and cognitive models of language. Three models of language are described: the 19th Century neurological model, which describes both the anatomy and cognitive components of auditory and visual word processing, and two 20th Century cognitive models that are not constrained by anatomy but emphasise two different routes to reading that are not present in the neurological model. A series of functional imaging studies is then presented which shows that, as predicted by the 19th Century neurologists, auditory and visual word repetition engage the left posterior superior temporal and posterior inferior frontal cortices. More specifically, the roles Wernicke and Broca assigned to these regions lie respectively in the posterior superior temporal sulcus and the anterior insula. In addition, a region in the left posterior inferior temporal cortex is activated for word retrieval, thereby providing a second route to reading, as predicted by the 20th Century cognitive models. This region and its function may have been missed by the 19th Century neurologists because selective damage is rare. The angular gyrus, previously linked to the visual word form system, is shown to be part of a distributed semantic system that can be accessed by objects and faces as well as speech. Other components of the semantic system include several regions in the inferior and middle temporal lobes. From these functional imaging results, a new anatomically constrained model of word processing is proposed which reconciles the anatomical ambitions of the 19th Century neurologists and the cognitive finesse of the 20th Century cognitive models. The review focuses on single word processing and does not attempt to discuss how words are combined to generate sentences or how several languages are learned and interchanged. Progress in unravelling these and other related issues will depend on the integration of behavioural, computational and neurophysiological approaches, including neuroimaging. PMID:11117622

  16. Computer-based learning of spelling skills in children with and without dyslexia.

    PubMed

    Kast, Monika; Baschera, Gian-Marco; Gross, Markus; Jäncke, Lutz; Meyer, Martin

    2011-12-01

    Our spelling training software recodes words into multisensory representations comprising visual and auditory codes. These codes represent information about the letters and syllables of a word. An enhanced version, developed for this study, contains an additional phonological code and an improved word selection controller relying on a phoneme-based student model. We investigated the spelling behavior of children by means of learning curves based on log-file data from the previous and the enhanced software versions. First, we compared the learning progress of children with dyslexia working either with the previous software (n = 28) or the adapted version (n = 37). Second, we investigated the spelling behavior of children with dyslexia (n = 37) and matched children without dyslexia (n = 25). To gain deeper insight into which factors are relevant for acquiring spelling skills, we analyzed the influence of cognitive abilities, such as attention functions and verbal memory skills, on learning behavior. All investigations of the learning process are based on learning curve analyses of the collected log-file data. The results showed that children with dyslexia benefit significantly from the additional phonological cue and the corresponding phoneme-based student model. Indeed, children with dyslexia improved their spelling skills to the same extent as children without dyslexia and were able to memorize phoneme-to-grapheme correspondences when given the correct support and adequate training. In addition, children with low attention functions benefit from the structured learning environment. Generally, our data showed that memory resources are supportive cognitive functions for acquiring spelling skills and for using the information cues of a multimodal learning environment.

  17. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  19. Dysfunctional visual word form processing in progressive alexia

    PubMed Central

    Wilson, Stephen M.; Rising, Kindle; Stib, Matthew T.; Rapcsak, Steven Z.; Beeson, Pélagie M.

    2013-01-01

    Progressive alexia is an acquired reading deficit caused by degeneration of brain regions that are essential for written word processing. Functional imaging studies have shown that early processing of the visual word form depends on a hierarchical posterior-to-anterior processing stream in occipito-temporal cortex, whereby successive areas code increasingly larger and more complex perceptual attributes of the letter string. A region located in the left lateral occipito-temporal sulcus and adjacent fusiform gyrus shows maximal selectivity for words and has been dubbed the ‘visual word form area’. We studied two patients with progressive alexia in order to determine whether their reading deficits were associated with structural and/or functional abnormalities in this visual word form system. Voxel-based morphometry showed left-lateralized occipito-temporal atrophy in both patients, very mild in one, but moderate to severe in the other. The two patients, along with 10 control subjects, were scanned with functional magnetic resonance imaging as they viewed rapidly presented words, false font strings, or a fixation crosshair. This paradigm was optimized to reliably map brain regions involved in orthographic processing in individual subjects. All 10 control subjects showed a posterior-to-anterior gradient of selectivity for words, and all 10 showed a functionally defined visual word form area in the left hemisphere that was activated for words relative to false font strings. In contrast, neither of the two patients with progressive alexia showed any evidence for a selectivity gradient or for word-specific activation of the visual word form area. The patient with mild atrophy showed normal responses to both words and false font strings in the posterior part of the visual word form system, but a failure to develop selectivity for words in the more anterior part of the system. In contrast, the patient with moderate to severe atrophy showed minimal activation of any part of the visual word form system for either words or false font strings. Our results suggest that progressive alexia is associated with a dysfunctional visual word form system, with or without substantial cortical atrophy. Furthermore, these findings demonstrate that functional MRI has the potential to reveal the neural bases of cognitive deficits in neurodegenerative patients at very early stages, in some cases before the development of extensive atrophy. PMID:23471694

  1. A Graph-Embedding Approach to Hierarchical Visual Word Mergence.

    PubMed

    Wang, Lei; Liu, Lingqiao; Zhou, Luping

    2017-02-01

    Appropriately merging visual words is an effective dimension-reduction method for the bag-of-visual-words model in image classification. The approach of hierarchically merging visual words has been extensively employed, because it gives a fully determined merging hierarchy. Existing supervised hierarchical merging methods take different approaches and realize the merging process with various formulations. In this paper, we propose a unified hierarchical merging approach built upon the graph-embedding framework. Our approach is able to merge visual words for any scenario where a preferred structure and an undesired structure are defined, and can therefore effectively attend to all kinds of requirements for the word-merging process. In terms of computational efficiency, we show that our algorithm can seamlessly integrate a fast search strategy developed in our previous work and thus maintain the state-of-the-art merging speed. To the best of our knowledge, the proposed approach is the first to address hierarchical visual word merging in such a flexible and unified manner. As demonstrated, it can maintain excellent image classification performance even after a significant dimension reduction, and it outperforms all existing comparable visual word-merging methods. In a broad sense, our work provides an open platform for applying, evaluating, and developing new criteria for hierarchical word-merging tasks.
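
    To make the merging process concrete, here is a generic greedy hierarchical-merging sketch (hypothetical and deliberately simplified: it scores candidate merges with a plain between/within-class variance ratio rather than the paper's graph-embedding criterion, and it omits the fast search strategy, so it runs in cubic time per level).

    ```python
    import numpy as np

    def separability(X, y):
        """Between-class over within-class variance, summed over dimensions."""
        mu = X.mean(axis=0)
        between = sum((y == c).sum() * (X[y == c].mean(axis=0) - mu) ** 2
                      for c in np.unique(y))
        within = sum(((X[y == c] - X[y == c].mean(axis=0)) ** 2).sum(axis=0)
                     for c in np.unique(y))
        return (between / (within + 1e-12)).sum()

    def merge_words(X, y, n_target):
        """Repeatedly merge (sum) the pair of histogram columns whose merge
        best preserves class separability, until n_target words remain.

        X: (n_images, n_words) BoVW histograms; y: class labels."""
        X = X.astype(float).copy()
        while X.shape[1] > n_target:
            best = None
            for i in range(X.shape[1]):
                for j in range(i + 1, X.shape[1]):
                    Xm = np.delete(X, j, axis=1)
                    Xm[:, i] = X[:, i] + X[:, j]   # merged word = summed counts
                    s = separability(Xm, y)
                    if best is None or s > best[0]:
                        best = (s, Xm)
            X = best[1]
        return X
    ```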

  2. W-tree indexing for fast visual word generation.

    PubMed

    Shi, Miaojing; Xu, Ruixin; Tao, Dacheng; Xu, Chao

    2013-03-01

    The bag-of-visual-words representation has been widely used in image retrieval and visual recognition. The most time-consuming step in obtaining this representation is visual word generation, i.e., assigning visual words to the corresponding local features in a high-dimensional space. Recently, structures based on multibranch trees and forests have been adopted to reduce the time cost. However, these approaches cannot perform well without a large number of backtrackings. In this paper, by considering the spatial correlation of local features, we can significantly speed up the time-consuming visual word generation process while maintaining accuracy. In particular, visual words associated with certain structures frequently co-occur; hence, we can build a co-occurrence table for each visual word over a large-scale data set. By associating each visual word with a probability according to the corresponding co-occurrence table, we can assign a probabilistic weight to each node of a given index structure (e.g., a KD-tree or a K-means tree), in order to redirect the search path toward its global optimum within a small number of backtrackings. We carefully study the proposed scheme by comparing it with the Fast Library for Approximate Nearest Neighbors (FLANN) and random KD-trees on the Oxford data set. Thorough experimental results suggest the efficiency and effectiveness of the new scheme.
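
    A sketch of the co-occurrence table construction (names hypothetical; how the resulting probabilities weight tree nodes at query time is not reproduced here): for each visual word, record the words it most often co-occurs with across images, as normalised probabilities.

    ```python
    from collections import defaultdict

    def build_cooccurrence_table(image_word_sets, top_k=10):
        """image_word_sets: per image, the set of visual words assigned to
        its local features. Returns, per word, its top_k co-occurring words
        with probabilities; at query time these can bias the branch choice
        of a KD-tree / K-means tree, reducing backtracking."""
        counts = defaultdict(lambda: defaultdict(int))
        for words in image_word_sets:
            words = set(words)
            for a in words:
                for b in words:
                    if a != b:
                        counts[a][b] += 1
        table = {}
        for a, nbrs in counts.items():
            total = sum(nbrs.values())
            top = sorted(nbrs.items(), key=lambda kv: -kv[1])[:top_k]
            table[a] = {b: n / total for b, n in top}
        return table
    ```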

  3. Phonological similarity influences word learning in adults learning Spanish as a foreign language

    PubMed Central

    Stamer, Melissa K.; Vitevitch, Michael S.

    2013-01-01

    Neighborhood density—the number of words that sound similar to a given word (Luce & Pisoni, 1998)—influences word-learning in native English speaking children and adults (Storkel, 2004; Storkel, Armbruster, & Hogan, 2006): novel words with many similar sounding English words (i.e., dense neighborhood) are learned more quickly than novel words with few similar sounding English words (i.e., sparse neighborhood). The present study examined how neighborhood density influences word-learning in native English speaking adults learning Spanish as a foreign language. Students in their third-semester of Spanish language classes learned advanced Spanish words that sounded similar to many known Spanish words (i.e., dense neighborhood) or sounded similar to few known Spanish words (i.e., sparse neighborhood). In three word-learning tasks, performance was better for Spanish words with dense rather than sparse neighborhoods. These results suggest that a similar mechanism may be used to learn new words in a native and a foreign language. PMID:23950692
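
    Neighborhood density has a standard operational definition: the number of lexicon entries exactly one phoneme substitution, addition, or deletion away from the target (Luce & Pisoni, 1998). A small sketch, using orthographic strings as a stand-in for phonological transcriptions:

    ```python
    def is_neighbor(a, b):
        """True if b is one substitution, addition, or deletion from a."""
        if a == b:
            return False
        if len(a) == len(b):                      # substitution neighbor
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) == 1:             # addition/deletion neighbor
            short, long_ = (a, b) if len(a) < len(b) else (b, a)
            return any(long_[:i] + long_[i + 1:] == short
                       for i in range(len(long_)))
        return False

    def neighborhood_density(word, lexicon):
        """Number of known words one step away: dense vs. sparse."""
        return sum(is_neighbor(word, w) for w in lexicon)

    # neighborhood_density("gato", ["pato", "gasto", "gata", "mano"]) -> 3
    ```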

  4. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  5. The relationship between visual word and face processing lateralization in the fusiform gyri: A cross-sectional study.

    PubMed

    Davies-Thompson, Jodie; Johnston, Samantha; Tashakkor, Yashar; Pancaroglu, Raika; Barton, Jason J S

    2016-08-01

    Visual words and faces activate similar networks but with complementary hemispheric asymmetries, faces being lateralized to the right and words to the left. A recent theory proposes that this reflects developmental competition between visual word and face processing. We investigated whether this results in an inverse correlation between the degree of lateralization of visual word and face activation in the fusiform gyri. 26 literate right-handed healthy adults underwent functional MRI with face and word localizers. We derived lateralization indices for cluster size and peak responses for word and face activity in left and right fusiform gyri, and correlated these across subjects. A secondary analysis examined all face- and word-selective voxels in the inferior occipitotemporal cortex. No negative correlations were found. There were positive correlations for the peak MR response between word and face activity within the left hemisphere, and between word activity in the left visual word form area and face activity in the right fusiform face area. The face lateralization index was positively rather than negatively correlated with the word index. In summary, we do not find a complementary relationship between visual word and face lateralization across subjects. The significance of the positive correlations is unclear: some may reflect the influences of general factors such as attention, but others may point to other factors that influence lateralization of function. Copyright © 2016 Elsevier B.V. All rights reserved.
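
    The abstract does not give the formula, but a common definition of a lateralization index, shown here for concreteness, is LI = (L - R) / (L + R), computed separately for cluster size and peak response:

    ```python
    def lateralization_index(left, right):
        """LI = (L - R) / (L + R): +1 is fully left-lateralized,
        -1 fully right-lateralized, 0 bilateral. L and R are an activation
        measure (cluster size or peak response) in each hemisphere."""
        return (left - right) / (left + right)

    # e.g. a word-related cluster of 420 voxels on the left and 180 on the
    # right gives LI = (420 - 180) / (420 + 180) = 0.4 (left-lateralized).
    ```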

  7. Evaluating the developmental trajectory of the episodic buffer component of working memory and its relation to word recognition in children.

    PubMed

    Wang, Shinmin; Allen, Richard J; Lee, Jun Ren; Hsieh, Chia-En

    2015-05-01

    The creation of temporary bound representations of information from different sources is one of the key abilities attributed to the episodic buffer component of working memory. Whereas the role of working memory in word learning has received substantial attention, very little is known about the link between the development of word recognition skills and the ability to bind information in the episodic buffer of working memory, and how this ability may develop with age. This study examined the performance of Grade 2 children (8 years old), Grade 3 children (9 years old), and young adults on a task designed to measure their ability to bind visual and auditory-verbal information in working memory. Children's performance on this task significantly correlated with their word recognition skills even when chronological age, memory for individual elements, and other possible reading-related factors were taken into account. In addition, clear developmental trajectories were observed, with improvements in the ability to hold temporarily bound information in working memory between Grades 2 and 3, and between the child and adult groups, that were independent of memory for the individual elements. These findings suggest that the capacity to temporarily bind novel auditory-verbal information to visual form in working memory is linked to the development of word recognition in children and improves with age. Copyright © 2015 Elsevier Inc. All rights reserved.

  8. Audio-visual speech perception in infants and toddlers with Down syndrome, fragile X syndrome, and Williams syndrome.

    PubMed

    D'Souza, Dean; D'Souza, Hana; Johnson, Mark H; Karmiloff-Smith, Annette

    2016-08-01

    Typically-developing (TD) infants can construct unified cross-modal percepts, such as a speaking face, by integrating auditory-visual (AV) information. This skill is a key building block upon which higher-level skills, such as word learning, are built. Because word learning is seriously delayed in most children with neurodevelopmental disorders, we assessed the hypothesis that this delay partly results from a deficit in integrating AV speech cues. AV speech integration has rarely been investigated in neurodevelopmental disorders, and never previously in infants. We probed for the McGurk effect, which occurs when the auditory component of one sound (/ba/) is paired with the visual component of another sound (/ga/), leading to the perception of an illusory third sound (/da/ or /tha/). We measured AV integration in 95 infants/toddlers with Down, fragile X, or Williams syndrome, whom we matched on Chronological and Mental Age to 25 TD infants. We also assessed a more basic AV perceptual ability: sensitivity to matching vs. mismatching AV speech stimuli. Infants with Williams syndrome failed to demonstrate a McGurk effect, indicating poor AV speech integration. Moreover, while the TD children discriminated between matching and mismatching AV stimuli, none of the other groups did, hinting at a basic deficit or delay in AV speech processing, which is likely to constrain subsequent language development. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The reward of seeing: Different types of visual reward and their ability to modify oculomotor learning.

    PubMed

    Meermeier, Annegret; Gremmler, Svenja; Richert, Kerstin; Eckermann, Til; Lappe, Markus

    2017-10-01

    Saccadic adaptation is an oculomotor learning process that maintains the accuracy of eye movements to ensure effective perception of the environment. Although saccadic adaptation is commonly considered an automatic and low-level motor calibration in the cerebellum, we recently found that the strength of adaptation is influenced by the visual content of the target: pictures of humans produced stronger adaptation than noise stimuli. This suggests that meaningful images may be considered rewarding or valuable in oculomotor learning. Here we report three experiments that establish the boundaries of this effect. In the first, we tested whether stimuli that were associated with high and low value following long-term self-administered reinforcement learning produce stronger adaptation. Twenty-eight expert gamers participated in two sessions of adaptation to game-related high- and low-reward stimuli, but showed no difference in saccadic adaptation (BF01 = 5.49). In the second experiment, we tested whether cognitive (literate) meaning could induce stronger adaptation by comparing targets consisting of words and nonwords. The results of twenty subjects revealed no difference in adaptation strength (BF01 = 3.21). The third experiment compared images of human figures to noise patterns for reactive saccades. Twenty-two subjects adapted significantly more toward images of human figures than toward noise (p < 0.001). We conclude that only primary reinforcement (human vs. noise), but not secondary reinforcement (words vs. nonwords, high- vs. low-value video game images), affects saccadic adaptation.

  10. Representation of visual symbols in the visual word processing network.

    PubMed

    Muayqil, Taim; Davies-Thompson, Jodie; Barton, Jason J S

    2015-03-01

    Previous studies have shown that word processing involves a predominantly left-sided occipitotemporal network. Words are a form of symbolic representation, in that they are arbitrary perceptual stimuli that represent other objects, actions or concepts. Lesions of parts of the visual word processing network can cause alexia, which can be associated with difficulty processing other types of symbols such as musical notation or road signs. We investigated whether components of the visual word processing network were also activated by other types of symbols. In 16 music-literate subjects, we defined the visual word network using fMRI and examined responses to four symbolic categories: visual words, musical notation, instructive symbols (e.g. traffic signs), and flags and logos. For each category we compared responses not only to scrambled stimuli, but also to similar stimuli that lacked symbolic meaning. The left visual word form area and a homologous right fusiform region responded similarly to all four categories, and equally to both symbolic and non-symbolic equivalents. A greater response to symbolic than to non-symbolic stimuli occurred only in the left inferior frontal and middle temporal gyri, and only for words; the left inferior frontal gyrus showed this effect for musical notation as well. A whole-brain analysis comparing symbolic versus non-symbolic stimuli revealed a distributed network of inferior temporo-occipital and parietal regions that differed for different symbols. The fusiform gyri are involved in processing the form of many symbolic stimuli, but not specifically stimuli with symbolic content. Selectivity for symbolic content emerges in the visual word network only at the level of the middle temporal and inferior frontal gyri, and is specific to words and musical notation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  11. Introducing social cues in multimedia learning: The role of pedagogic agents' image and language in a scientific lesson

    NASA Astrophysics Data System (ADS)

    Moreno, Roxana Arleen

    The present dissertation tested the hypothesis that software pedagogical agents can promote constructivist learning in a discovery-based multimedia environment. In a preliminary study, students who received a computer-based lesson on environmental science performed better on subsequent tests of problem solving and motivation when they learned with the mediation of a fictional agent than when they learned the same material from text. In order to investigate further the basis for this personal agent effect, I varied whether the agent's words were presented as speech or on-screen text and whether or not the agent's image appeared on the screen. Both with a fictional agent (Experiment 1) and with a video of a human face (Experiment 2), students performed better on tests of retention, problem-solving transfer, and program ratings when words were presented as speech rather than on-screen text (producing a modality effect), but the visual presence of the agent did not affect test performance (producing no image effect). Next, I varied whether the agent's words were presented in conversational style (i.e., as dialogue) or formal style (i.e., as monologue), both using speech (Experiment 3) and on-screen text (Experiment 4). In both experiments, there was a dialogue effect in which conversational style produced better retention and transfer performance. Students who learned with conversational-style text also rated the program more favorably than those who learned with monologue-style text. The results support cognitive principles of multimedia learning which underlie the understanding of a computer lesson about a complex scientific system.

  12. The Role of Native-Language Phonology in the Auditory Word Identification and Visual Word Recognition of Russian-English Bilinguals

    ERIC Educational Resources Information Center

    Shafiro, Valeriy; Kharkhurin, Anatoliy V.

    2009-01-01

    Does native language phonology influence visual word processing in a second language? This question was investigated in two experiments with two groups of Russian-English bilinguals, differing in their English experience, and a monolingual English control group. Experiment 1 tested visual word recognition following semantic…

  13. Mothers' multimodal information processing is modulated by multimodal interactions with their infants.

    PubMed

    Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako

    2014-10-17

    Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to be sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared to non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimuli presentations. These results suggest that mother-infant daily interactions enhance multimodal integration of the maternal brain in parenting contexts.

  14. Mothers' multimodal information processing is modulated by multimodal interactions with their infants

    PubMed Central

    Tanaka, Yukari; Fukushima, Hirokata; Okanoya, Kazuo; Myowa-Yamakoshi, Masako

    2014-01-01

    Social learning in infancy is known to be facilitated by multimodal (e.g., visual, tactile, and verbal) cues provided by caregivers. In parallel with infants' development, recent research has revealed that maternal neural activity is altered through interaction with infants, for instance, to be sensitive to infant-directed speech (IDS). The present study investigated the effect of mother-infant multimodal interaction on maternal neural activity. Event-related potentials (ERPs) of mothers were compared to non-mothers during perception of tactile-related words primed by tactile cues. Only mothers showed ERP modulation when tactile cues were incongruent with the subsequent words, and only when the words were delivered with IDS prosody. Furthermore, the frequency of mothers' use of those words was correlated with the magnitude of ERP differentiation between congruent and incongruent stimuli presentations. These results suggest that mother-infant daily interactions enhance multimodal integration of the maternal brain in parenting contexts. PMID:25322936

  15. Eye-fixation behavior, lexical storage, and visual word recognition in a split processing model.

    PubMed

    Shillcock, R; Ellison, T M; Monaghan, P

    2000-10-01

    Some of the implications of a model of visual word recognition in which processing is conditioned by the anatomical splitting of the visual field between the two hemispheres of the brain are explored. The authors investigate the optimal processing of visually presented words within such an architecture, and, for a realistically sized lexicon of English, characterize a computationally optimal fixation point in reading. They demonstrate that this approach motivates a range of behavior observed in reading isolated words and text, including the optimal viewing position and its relationship with the preferred viewing location, the failure to fixate smaller words, asymmetries in hemisphere-specific processing, and the priority given to the exterior letters of words. The authors also show that split architectures facilitate the uptake of all the letter-position information necessary for efficient word recognition and that this information may be less specific than is normally assumed. A split model of word recognition captures a range of behavior in reading that is greater than that covered by existing models of visual word recognition.

  16. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    NASA Astrophysics Data System (ADS)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult with HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. Then, the learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is captured through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
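
    The core representational step described above can be illustrated with a minimal bag-of-visual-words sketch in Python. This is a hedged simplification, not the paper's correlaton model: it assumes local deep features are already extracted (random arrays stand in for CNN activations), quantizes them into visual words with k-means, and reduces each image to a word histogram; the multiscale correlograms that capture spatial arrangement are not reproduced.

      # Minimal bag-of-visual-words sketch; all data and sizes are hypothetical.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(0)

      # Stand-ins for CNN activations: per image, 100 local descriptors of
      # dimension 64 (a real system would extract these at several scales).
      images = [rng.normal(size=(100, 64)) for _ in range(10)]

      k = 32  # vocabulary size (assumed)
      vocab = KMeans(n_clusters=k, n_init=10, random_state=0)
      vocab.fit(np.vstack(images))  # learn visual words from all local features

      def bovw_histogram(feats):
          """Quantize local features against the vocabulary and return a
          normalized visual-word histogram for one image."""
          words = vocab.predict(feats)
          hist = np.bincount(words, minlength=k).astype(float)
          return hist / hist.sum()

      descriptors = np.array([bovw_histogram(f) for f in images])
      print(descriptors.shape)  # (10, 32): one compact representation per image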

  17. The Role of Visuals in Verbal Learning--Studies in Televised Instruction, Report 3, Summary Report.

    ERIC Educational Resources Information Center

    Gropper, George L.

    The integration of words and pictures in the two studies reported in this volume was accomplished unconventionally. In one study, an entire topic, Archimedes' law, was covered in a self-contained, entirely pictorial lesson and also in a self-contained, entirely verbal lesson. Students acquired all the concepts and principles making up Archimedes'…

  18. Using Reinforcement Learning to Understand the Emergence of "Intelligent" Eye-Movement Behavior during Reading

    ERIC Educational Resources Information Center

    Reichle, Erik D.; Laurent, Patryk A.

    2006-01-01

    The eye movements of skilled readers are typically very regular (K. Rayner, 1998). This regularity may arise as a result of the perceptual, cognitive, and motor limitations of the reader (e.g., limited visual acuity) and the inherent constraints of the task (e.g., identifying the words in their correct order). To examine this hypothesis,…

  19. An Analysis of the Units "I'm Learning My Past" and "The Place Where We Live" in the Social Studies Textbook Related to Critical Thinking Standards

    ERIC Educational Resources Information Center

    Aybek, Birsel; Aslan, Serkan

    2016-01-01

    Problem Statement: Various research have been conducted investigating the quality and quantity of textbooks such as wording, content, design, visuality, physical properties, activities, methods and techniques, questions and experiments, events, misconceptions, organizations, pictures, text selection, end of unit questions and assessments, indexes…

  20. [Analysis of intrusion errors in free recall].

    PubMed

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  1. Automatic lip reading by using multimodal visual features

    NASA Astrophysics Data System (ADS)

    Takahashi, Shohei; Ohya, Jun

    2013-12-01

    Speech recognition has been studied for a long time, but it does not work well in noisy places such as cars or trains. In addition, people who are hearing-impaired or have difficulty hearing cannot benefit from speech recognition. To recognize speech automatically, visual information is also important: people understand speech not only from audio information but also from visual information, such as temporal changes in lip shape. A vision-based speech recognition method could work well in noisy places, and could also be useful for people with hearing disabilities. In this paper, we propose an automatic lip-reading method for recognizing speech from multimodal visual information alone, without using any audio information. First, the Active Shape Model (ASM) is used to track and detect the face and lips in a video sequence. Second, the shape, optical flow and spatial frequencies of the lip features are extracted from the lips detected by the ASM. Next, the extracted multimodal features are ordered chronologically, and a Support Vector Machine is trained on them to learn and classify the spoken words. Experiments on classifying several words show promising results for the proposed method.
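
    The classification stage of the pipeline sketched above can be illustrated as follows. This is a minimal sketch under stated assumptions, not the authors' implementation: per-frame lip features (which the paper derives from ASM shape, optical flow, and spatial frequencies) are faked with random arrays, each variable-length sequence is resampled to a fixed length to preserve chronological order, and a support vector machine is trained on the flattened sequences.

      # Toy final stage of a lip-reading classifier; features are synthetic.
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(1)

      def to_fixed_length(frames, n_frames=20):
          """Resample a (T, d) sequence of per-frame lip features to n_frames
          rows and flatten it, so utterances of different durations align."""
          idx = np.linspace(0, len(frames) - 1, n_frames).round().astype(int)
          return frames[idx].ravel()

      # 40 synthetic utterances of two words, each a (T, 8) feature sequence.
      X = [rng.normal(loc=w, size=(rng.integers(15, 40), 8))
           for w in (0.0, 1.0) for _ in range(20)]
      y = [0] * 20 + [1] * 20

      X_fixed = np.array([to_fixed_length(seq) for seq in X])
      clf = SVC(kernel="rbf").fit(X_fixed, y)
      print(clf.score(X_fixed, y))  # training accuracy on the toy data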

  2. Word Recognition and Word Identification: A Review of Research on Effective Instructional Practices with Learning Disabled Students.

    ERIC Educational Resources Information Center

    McCormick, Sandra; Becker, Evelyn Z.

    1996-01-01

    Reviews investigations related to word learning of learning disabled students. Finds that direct word study leads to reading improvement for learning disabled pupils, but that indirect instruction also provides assistance. Finds also that word knowledge instruction not only promotes word learning, but can heighten learning disabled students'…

  3. Analysis of brain activity and response to colour stimuli during learning tasks: an EEG study

    NASA Astrophysics Data System (ADS)

    Folgieri, Raffaella; Lucchiari, Claudio; Marini, Daniele

    2013-02-01

    The research project intends to demonstrate how EEG detection through a BCI device can improve the analysis and interpretation of colour-driven cognitive processes through a combined approach of cognitive science and information technology methods. To this end, an experiment was designed to compare the results of the traditional (qualitative and quantitative) cognitive analysis approach with EEG analysis of the evoked potentials. In our case, the sensory stimulus is represented by colours, while the cognitive task consists in remembering the words appearing on the screen, under different combinations of foreground (word) and background colours. In this work we analysed data collected from a sample of students involved in a learning process during which they received visual stimuli based on colour variation. The stimuli concerned both the background of the text to be learned and the colour of the characters. The experiment indicated some interesting results concerning the use of primary (RGB) and complementary (CMY) colours.

  4. Effects of a Word-Learning Training on Children With Cochlear Implants

    PubMed Central

    Lund, Emily

    2014-01-01

    Preschool children with hearing loss who use cochlear implants demonstrate vocabulary delays when compared to their peers without hearing loss. These delays may be a result of deficient word-learning abilities; children with cochlear implants perform more poorly on rapid word-learning tasks than children with normal hearing. This study explored the malleability of rapid word learning of preschoolers with cochlear implants by evaluating the effects of a word-learning training on rapid word learning. A single-subject, multiple probe design across participants measured the impact of the training on children’s rapid word-learning performance. Participants included 5 preschool children with cochlear implants who had an expressive lexicon of less than 150 words. An investigator guided children to identify, repeat, and learn about unknown sets of words in 2-weekly sessions across 10 weeks. The probe measure, a rapid word-learning task with a different set of words than those taught during training, was collected in the baseline, training, and maintenance conditions. All participants improved their receptive rapid word-learning performance in the training condition. The functional relation indicates that the receptive rapid word-learning performance of children with cochlear implants is malleable. PMID:23981321

  5. Consolidation of novel word learning in native English-speaking adults.

    PubMed

    Kurdziel, Laura B F; Spencer, Rebecca M C

    2016-01-01

    Sleep has been shown to improve the retention of newly learned words. However, most methodologies have used artificial or foreign language stimuli, with learning limited to word/novel word or word/image pairs. Such stimuli differ from many word-learning scenarios in which definition strings are learned with novel words. Thus, we examined sleep's benefit on learning new words within a native language by using very low-frequency words. Participants learned 45 low-frequency English words and, at subsequent recall, attempted to recall the words when given the corresponding definitions. Participants either learned in the morning with recall in the evening (wake group), or learned in the evening with recall the following morning (sleep group). Performance change across the delay was significantly better in the sleep than the wake group. Additionally, the Levenshtein distance, a measure of correctness of the typed word compared with the target word, became significantly worse following wake, whereas sleep protected correctness of recall. Polysomnographic data from a subsample of participants suggested that rapid eye movement (REM) sleep may be particularly important for this benefit. These results lend further support for sleep's function on semantic learning even for word/definition pairs within a native language.
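
    The Levenshtein distance used above to score typed recall is a standard edit distance: the minimum number of single-character insertions, deletions, and substitutions needed to turn the typed response into the target word. A textbook dynamic-programming implementation (not the authors' code) is:

      def levenshtein(a: str, b: str) -> int:
          """Minimum number of insertions, deletions, and substitutions
          needed to transform string a into string b."""
          prev = list(range(len(b) + 1))  # distances for the empty prefix of a
          for i, ca in enumerate(a, 1):
              curr = [i]
              for j, cb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ca != cb)))   # substitution
              prev = curr
          return prev[-1]

      # A one-letter misspelling of a low-frequency target scores 1:
      print(levenshtein("perspicacious", "perspicatious"))  # 1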

  6. The Effects of Visual Attention Span and Phonological Decoding in Reading Comprehension in Dyslexia: A Path Analysis.

    PubMed

    Chen, Chen; Schneps, Matthew H; Masyn, Katherine E; Thomson, Jennifer M

    2016-11-01

    Increasing evidence has shown visual attention span to be a factor, distinct from phonological skills, that explains single-word identification (pseudo-word/word reading) performance in dyslexia. Yet, little is known about how well visual attention span explains text comprehension. Observing reading comprehension in a sample of 105 high school students with dyslexia, we used a path analysis to examine the direct and indirect paths between visual attention span and reading comprehension while controlling for other factors such as phonological awareness, letter identification, short-term memory, IQ and age. Integrating phonemic decoding efficiency skills in the analytic model, this study aimed to disentangle how visual attention span and phonological skills work together in reading comprehension for readers with dyslexia. We found visual attention span to have a significant direct effect on the more difficult level of reading comprehension but not on the easier level. It also had a significant direct effect on pseudo-word identification but not on word identification. In addition, we found that visual attention span indirectly explains reading comprehension through pseudo-word reading and word reading skills. This study supports the hypothesis that at least part of the dyslexic profile can be explained by visual attention abilities. Copyright © 2016 John Wiley & Sons, Ltd.

  7. The Processing of Visual and Phonological Configurations of Chinese One- and Two-Character Words in a Priming Task of Semantic Categorization.

    PubMed

    Ma, Bosen; Wang, Xiaoyun; Li, Degao

    2015-01-01

    To separate the contribution of phonological information from that of visual-orthographic information in the recognition of a Chinese word composed of one or two characters, we conducted two experiments using a priming task of semantic categorization (PTSC), in which length (one- or two-character words), prime relatedness (related or unrelated prime-target pairs), and SOA (47, 87, or 187 ms) were manipulated. The prime was similar to the target in meaning or in visual configuration in Experiment A, and in meaning or in pronunciation in Experiment B. The results indicate that the two-character words were similar to the one-character words but were less demanding of cognitive resources in the processing of phonological, visual-orthographic, and semantic information. The phonological primes had a facilitating effect at the SOA of 47 ms but an inhibitory effect at the SOA of 187 ms on the participants' reaction times; the visual-orthographic primes had only an inhibitory influence on reaction times, at the SOA of 187 ms. The visual configuration of a Chinese word of one or two characters makes its own contribution to retrieving the word's meanings; similarly, the phonological configuration of a one- or two-character word plays its own role in triggering activation of the word's semantic representations.

  8. A Dual-Route Perspective on Brain Activation in Response to Visual Words: Evidence for a Length by Lexicality Interaction in the Visual Word Form Area (VWFA)

    PubMed Central

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-01-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., “Does xxx sound like an existing word?”) presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538

  9. A dual-route perspective on brain activation in response to visual words: evidence for a length by lexicality interaction in the visual word form area (VWFA).

    PubMed

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-02-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. Copyright (c) 2009 Elsevier Inc. All rights reserved.

  10. Subliminal convergence of Kanji and Kana words: further evidence for functional parcellation of the posterior temporal cortex in visual word perception.

    PubMed

    Nakamura, Kimihiro; Dehaene, Stanislas; Jobert, Antoinette; Le Bihan, Denis; Kouider, Sid

    2005-06-01

    Recent evidence has suggested that the human occipitotemporal region comprises several subregions, each sensitive to a distinct processing level of visual words. To further explore the functional architecture of visual word recognition, we employed a subliminal priming method with functional magnetic resonance imaging (fMRI) during semantic judgments of words presented in two different Japanese scripts, Kanji and Kana. Each target word was preceded by a subliminal presentation of either the same or a different word, and in the same or a different script. Behaviorally, word repetition produced significant priming regardless of whether the words were presented in the same or different script. At the neural level, this cross-script priming was associated with repetition suppression in the left inferior temporal cortex anterior and dorsal to the visual word form area hypothesized for alphabetical writing systems, suggesting that cross-script convergence occurred at a semantic level. fMRI also evidenced a shared visual occipito-temporal activation for words in the two scripts, with slightly more mesial and right-predominant activation for Kanji and with greater occipital activation for Kana. These results thus allow us to separate script-specific and script-independent regions in the posterior temporal lobe, while demonstrating that both can be activated subliminally.

  11. Measuring Explicit Word Learning of Preschool Children: A Development Study.

    PubMed

    Kelley, Elizabeth Spencer

    2017-08-15

    The purpose of this article is to present preliminary results related to the development of a new measure of explicit word learning. The measure incorporated elements of explicit vocabulary instruction and dynamic assessment and was designed to be sensitive to differences in word learning skill and to be feasible for use in clinical settings. The explicit word learning measure included brief teaching trials and repeated fine-grained measurement of semantic knowledge and production of 3 novel words (2 verbs and 1 adjective). Preschool children (N = 23) completed the measure of explicit word learning; standardized, norm-referenced measures of expressive and receptive vocabulary; and an incidental word learning task. The measure of explicit word learning provided meaningful information about word learning. Performance on the explicit measure was related to existing vocabulary knowledge and incidental word learning. Findings from this development study indicate that further examination of the measure of explicit word learning is warranted. The measure may have the potential to identify children who are poor word learners. https://doi.org/10.23641/asha.5170738.

  12. Novel-word learning deficits in Mandarin-speaking preschool children with specific language impairments.

    PubMed

    Chen, Yuchun; Liu, Huei-Mei

    2014-01-01

    Children with SLI exhibit overall deficits in novel word learning compared to their age-matched peers. However, the manifestation of this word learning difficulty in SLI is not consistent across tasks, and the factors affecting learning performance have not yet been determined. Our aim was to examine the extent of word learning difficulties in Mandarin-speaking preschool children with SLI, and to explore the potential influence of existing lexical knowledge on the word learning process. Preschool children with SLI (n=37) and with typical language development (n=33) were exposed to novel words for unfamiliar objects embedded in stories. Word learning tasks including initial mapping and short-term repetitive learning were designed. Results revealed that Mandarin-speaking preschool children with SLI performed as well as their age peers in the initial form-meaning mapping task. Their word learning difficulty was evident only in the short-term repetitive learning task under a production demand, and their learning speed was slower than that of the control group. Children with SLI learned the novel words with a semantic head better in both the initial mapping and repetitive learning tasks. Moderate correlations between word learning performance and scores on standardized vocabulary were found after controlling for children's age and nonverbal IQ. The results suggest that the word learning difficulty in children with SLI occurs in the process of establishing a robust phonological representation at the beginning stage of word learning. Also, implicit compound knowledge is applied to aid the word learning process for children with and without SLI. We also provide empirical data to validate the relationship between preschool children's word learning performance and their existing receptive vocabulary ability. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, and subjects performed lexical decisions on them. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that increased stimulus presentation time affected each visual field differently only in the unfamiliar script condition. To examine whether this lateral difference in processing unfamiliar scripts was related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, while orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  14. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  15. Image Location Estimation by Salient Region Matching.

    PubMed

    Qian, Xueming; Zhao, Yisi; Han, Junwei

    2015-11-01

    Nowadays, the locations of images are widely used in many application scenarios involving large geo-tagged image corpora. For images that are not geographically tagged, we estimate their locations with the help of a large geo-tagged image set via content-based image retrieval. In this paper, we exploit the spatial information of useful visual words to improve image location estimation (i.e., content-based image retrieval performance). We propose to generate visual word groups by mean-shift clustering. To improve retrieval performance, a spatial constraint is utilized to code the relative positions of visual words. We propose to generate a position descriptor for each visual word and to build a fast indexing structure for the visual word groups. Experiments show the effectiveness of the proposed approach.
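
    The grouping step can be illustrated with a small sketch. This is a hedged approximation, not the paper's system: scikit-learn's MeanShift stands in for the clustering of matched visual words by image position, with invented coordinates; the position descriptor and fast indexing structure are not reproduced.

      # Group visual-word detections by spatial position with mean-shift.
      import numpy as np
      from sklearn.cluster import MeanShift

      rng = np.random.default_rng(2)

      # Hypothetical (x, y) coordinates of visual words in a 512x512 image:
      # two compact spatial groups plus scattered outliers.
      coords = np.vstack([
          rng.normal((100, 120), 8, size=(30, 2)),
          rng.normal((420, 300), 8, size=(30, 2)),
          rng.uniform(0, 512, size=(10, 2)),
      ])

      ms = MeanShift(bandwidth=40).fit(coords)
      print(len(np.unique(ms.labels_)), "visual-word groups found")
      print(ms.cluster_centers_)  # one representative position per group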

  16. Morphable Word Clouds for Time-Varying Text Data Visualization.

    PubMed

    Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee

    2015-12-01

    A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in the dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transitions, and can also observe the details from the word clouds in individual frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method in morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the method.
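
    As a loose illustration of the dynamics-based idea, the toy step below moves word-tags toward target positions sampled from the next shape while pushing overlapping tags apart. This is emphatically not the paper's method, which uses rigid-body dynamics with geometric, aesthetic, and temporal-coherence constraints; every force constant and position here is invented.

      # One damped Euler step of a toy force-based word-tag layout.
      import numpy as np

      def layout_step(pos, vel, targets, dt=0.05,
                      k_attract=2.0, k_repel=500.0, damping=0.9):
          """Pull each tag toward its target in the desired shape and push
          tags apart so they do not pile up; integrate with damping."""
          force = k_attract * (targets - pos)        # spring toward the shape
          diff = pos[:, None, :] - pos[None, :, :]   # pairwise offsets
          dist2 = (diff ** 2).sum(-1) + 1e-6
          np.fill_diagonal(dist2, np.inf)            # no self-repulsion
          force += k_repel * (diff / dist2[..., None]).sum(axis=1)
          vel = damping * (vel + dt * force)
          return pos + dt * vel, vel

      rng = np.random.default_rng(3)
      pos = rng.uniform(0, 100, (20, 2))      # current tag positions
      vel = np.zeros_like(pos)
      targets = rng.uniform(0, 100, (20, 2))  # points on the next shape
      for _ in range(200):
          pos, vel = layout_step(pos, vel, targets)
      print(np.abs(pos - targets).max())      # tags settle near their targets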

  17. Orthographic versus semantic matching in visual search for words within lists.

    PubMed

    Léger, Laure; Rouet, Jean-François; Ros, Christine; Vibert, Nicolas

    2012-03-01

    An eye-tracking experiment was performed to assess the influence of orthographic and semantic distractor words on visual search for words within lists. The target word (e.g., "raven") was either shown to participants before the search (literal search) or defined by its semantic category (e.g., "bird", categorical search). In both cases, the type of words included in the list affected visual search times and eye movement patterns. In the literal condition, the presence of orthographic distractors sharing initial and final letters with the target word strongly increased search times. Indeed, the orthographic distractors attracted participants' gaze and were fixated for longer times than other words in the list. The presence of semantic distractors related to the target word also increased search times, which suggests that significant automatic semantic processing of nontarget words took place. In the categorical condition, semantic distractors were expected to have a greater impact on the search task. As expected, the presence in the list of semantic associates of the target word led to target selection errors. However, semantic distractors did not significantly increase search times any more, whereas orthographic distractors still did. Hence, the visual characteristics of nontarget words can be strong predictors of the efficiency of visual search even when the exact target word is unknown. The respective impacts of orthographic and semantic distractors depended more on the characteristics of lists than on the nature of the search task.

  18. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.
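
    The visual enhancement measure is defined verbally above; on its common formulation (an assumption here, since the abstract gives no formula), it expresses the audiovisual gain as a proportion of the headroom left by auditory-only performance:

      def visual_enhancement(p_av: float, p_a: float) -> float:
          """Ra = (AV - A) / (1 - A), with scores as proportions correct.
          This is the usual formulation; the paper defines Ra only in prose."""
          return (p_av - p_a) / (1.0 - p_a)

      # E.g., 60% correct auditory-only and 85% correct audiovisual:
      print(visual_enhancement(0.85, 0.60))  # 0.625 of the possible gain realized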

  19. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  20. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children - this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  1. Comparison of spatiotemporal cortical activation pattern during visual perception of Korean, English, Chinese words: an event-related potential study.

    PubMed

    Kim, Kyung Hwan; Kim, Ja Hyun

    2006-02-20

    The aim of this study was to compare spatiotemporal cortical activation patterns during the visual perception of Korean, English, and Chinese words. The comparison of these three languages offers an opportunity to study the effect of written forms on the cortical processing of visually presented words, because of the partial similarities and differences among words of these languages, and the familiarity of native Koreans with all three languages at the word level. Single-character words and pictograms were excluded from the stimuli in order to activate only the neuronal circuitries involved in word perception. Since a variety of cerebral processes are sequentially evoked during visual word perception, a high temporal resolution is required; we therefore utilized event-related potentials (ERPs) obtained from high-density electroencephalograms. The differences and similarities observed from statistical analyses of ERP amplitudes, the correlation between ERP amplitudes and response times, and the patterns of current source density appear to be in line with the demands of visual and semantic analysis resulting from the characteristics of each language, and the expected task difficulties for native Korean subjects.

  2. Competition between multiple words for a referent in cross-situational word learning

    PubMed Central

    Benitez, Viridiana L.; Yurovsky, Daniel; Smith, Linda B.

    2016-01-01

    Three experiments investigated competition between word-object pairings in a cross-situational word-learning paradigm. Adults were presented with One-Word pairings, where a single word labeled a single object, and Two-Word pairings, where two words labeled a single object. In addition to measuring learning of these two pairing types, we measured competition between words that refer to the same object. When the word-object co-occurrences were presented intermixed in training (Experiment 1), we found evidence for direct competition between words that label the same referent. Separating the two words for an object in time eliminated any evidence for this competition (Experiment 2). Experiment 3 demonstrated that adding a linguistic cue to the second label for a referent led to different competition effects between adults who self-reported different language learning histories, suggesting both distinctiveness and language learning history affect competition. Finally, in all experiments, competition effects were unrelated to participants’ explicit judgments of learning, suggesting that competition reflects the operating characteristics of implicit learning processes. Together, these results demonstrate that the role of competition between overlapping associations in statistical word-referent learning depends on time, the distinctiveness of word-object pairings, and language learning history. PMID:27087742
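
    The co-occurrence-tracking logic behind cross-situational learning can be illustrated with a minimal associative sketch. This is a textbook-style simplification, not the paper's design: the learner counts every word-object co-occurrence across ambiguous trials (all words, objects, and trial structure below are invented) and maps each word to its most frequent companion.

      # Minimal cross-situational learner: count co-occurrences, pick the max.
      from collections import defaultdict
      from itertools import product

      trials = [  # each trial: words heard alongside candidate objects in view
          ({"blicket", "dax"}, {"cup", "ball"}),
          ({"blicket", "wug"}, {"cup", "shoe"}),
          ({"dax", "wug"}, {"ball", "shoe"}),
      ]

      counts = defaultdict(int)
      for words, objects in trials:
          for w, o in product(words, objects):
              counts[(w, o)] += 1  # every co-occurrence is a vote

      lexicon = {}
      for w in ["blicket", "dax", "wug"]:
          candidates = {o: c for (ww, o), c in counts.items() if ww == w}
          lexicon[w] = max(candidates, key=candidates.get)
      print(lexicon)  # {'blicket': 'cup', 'dax': 'ball', 'wug': 'shoe'}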

  3. First-year medical students prefer multiple learning styles.

    PubMed

    Lujan, Heidi L; DiCarlo, Stephen E

    2006-03-01

    Students have preferences for the ways in which they receive information. The visual, auditory, reading/writing, kinesthetic (VARK) questionnaire identifies students' preferences for particular modes of information presentation. We administered the VARK questionnaire to our first-year medical students, and 166 of 250 students (66%) returned the completed questionnaire. Only 36.1% of the students preferred a single mode of information presentation. Among these students, 5.4% preferred visual (learning from graphs, charts, and flow diagrams), 4.8% preferred auditory (learning from speech), 7.8% preferred printed words (learning from reading and writing), and 18.1% preferred using all their senses (kinesthetic: learning from touch, hearing, smell, taste, and sight). In contrast, most students (63.8%) preferred multiple modes [2 modes (24.5%), 3 modes (32.1%), or 4 modes (43.4%)] of information presentation. Knowing students' preferred modes can 1) help provide instruction tailored to each student's individual preference, 2) overcome the predisposition to treat all students in a similar way, and 3) motivate teachers to move from their preferred mode(s) to using others.

  4. Impairments of Multisensory Integration and Cross-Sensory Learning as Pathways to Dyslexia

    PubMed Central

    Hahn, Noemi; Foxe, John J.; Molholm, Sophie

    2014-01-01

    Two sensory systems are intrinsic to learning to read. Written words enter the brain through the visual system and associated sounds through the auditory system. The task before the beginning reader is quite basic. She must learn correspondences between orthographic tokens and phonemic utterances, and she must do this to the point that there is seamless automatic ‘connection’ between these sensorially distinct units of language. It is self-evident then that learning to read requires formation of cross-sensory associations to the point that deeply encoded multisensory representations are attained. While the majority of individuals manage this task to a high degree of expertise, some struggle to attain even rudimentary capabilities. Why do dyslexic individuals, who learn well in myriad other domains, fail at this particular task? Here, we examine the literature as it pertains to multisensory processing in dyslexia. We find substantial support for multisensory deficits in dyslexia, and make the case that to fully understand its neurological basis, it will be necessary to thoroughly probe the integrity of auditory-visual integration mechanisms. PMID:25265514

  5. Learning new meanings for known words: Biphasic effects of prior knowledge.

    PubMed

    Fang, Xiaoping; Perfetti, Charles; Stafura, Joseph

    2017-01-01

    In acquiring word meanings, learners are often confronted by a single word form that is mapped to two or more meanings. For example, long after learning how to roller-skate, one may learn that "skate" is also a kind of fish. Such learning of new meanings for familiar words involves two potentially contrasting processes, relative to learning a new form with a new meaning: 1) form-based familiarity may facilitate learning a new meaning, and 2) meaning-based interference may inhibit learning a new meaning. We examined these two processes by having native English speakers learn new, unrelated meanings for familiar (high-frequency) and less familiar (low-frequency) English words, as well as for unfamiliar (novel or pseudo-) words. Tracking learning with cued-recall tasks at several points during learning revealed a biphasic pattern: higher learning rates and greater learning efficiency for familiar words relative to novel words early in learning, and a reversal of this pattern later in learning. Following learning, interference from the original meanings of familiar words was detected in a semantic relatedness judgment task. Additionally, lexical access to familiar words with new meanings became faster compared to their exposure controls, but no such effect occurred for less familiar words. Overall, the results suggest a biphasic pattern of facilitating and interfering processes: familiar word forms facilitate learning earlier, while interference from original meanings becomes more influential later. This biphasic pattern reflects the co-activation of new and old meanings during learning, a process that may play a role in the lexicalization of new meanings.

  6. Infants Encode Phonetic Detail during Cross-Situational Word Learning

    PubMed Central

    Escudero, Paola; Mulak, Karen E.; Vlach, Haley A.

    2016-01-01

    Infants often hear new words in the context of more than one candidate referent. In cross-situational word learning (XSWL), word-object mappings are determined by tracking co-occurrences of words and candidate referents across multiple learning events. Research demonstrates that infants can learn words in XSWL paradigms, suggesting that it is a viable model of real-world word learning. However, these studies have all presented infants with words that have no or minimal phonological overlap (e.g., BLICKET and GAX). Words often contain some degree of phonological overlap, and it is unknown whether infants can simultaneously encode fine phonological detail while learning words via XSWL. We tested 12-, 15-, 17-, and 20-month-olds' XSWL of eight words that, when paired, formed non-minimal pairs (e.g., BON–DEET) or minimal pairs (MPs; e.g., BON–TON, DEET–DIT). The results demonstrated that infants are able to learn word-object mappings and encode them with sufficient phonetic detail to identify words in both non-minimal and minimal pair contexts. Thus, this work suggests that infants are able to simultaneously discriminate phonetic differences between words and map words to referents in an implicit learning paradigm such as XSWL. PMID:27708605

  7. Neuron recycling for learning the alphabetic principles.

    PubMed

    Scliar-Cabral, Leonor

    2014-01-01

    The main purpose of this paper is to discuss an approach to the phonic method of learning-teaching early literacy development, namely that the visual neurons must be recycled to recognize the small differences among pertinent letter features. In addition to the challenge of segmenting the speech chain and the syllable for learning the alphabetic principles, neuroscience has demonstrated another major challenge: neurons in mammals are programmed to process visual signals symmetrically. In order to develop early literacy, visual neurons must be recycled to overcome this initial programming together with phonological awareness, expanding it with the ability to delimit words, including clitics, as well as assigning stress to words. To achieve this goal, Scliar's Early Literacy Development System was proposed and tested. Sixteen subjects (10 girls and 6 boys) comprised the experimental group (mean age 6.02 years), and 16 subjects (7 girls and 9 boys) formed the control group (mean age 6.10 years). The research instruments were a psychosociolinguistic questionnaire to reveal the subjects' profile and a post-test battery of tests. At the beginning of the experiment, the experimental group was submitted to an intervention program based on Scliar's Early Literacy Development System. One of the tests is discussed in this paper, the grapheme-phoneme test: subjects had to read aloud a pseudoword with 4 graphemes, signaled by the experimenter and designed to assess the subject's ability to convert a grapheme into its correspondent phoneme. The average value for the experimental group was 25.0 correct answers (SD = 11.4); the control group had an average of 14.3 correct answers (SD = 10.6). The difference was significant. The experimental results validate Scliar's Early Literacy Development System and indicate the need to redesign early literacy development methods. © 2014 S. Karger AG, Basel.

  8. Does Temporal Integration Occur for Unrecognizable Words in Visual Crowding?

    PubMed Central

    Zhou, Jifan; Lee, Chia-Lin; Li, Kuei-An; Tien, Yung-Hsuan; Yeh, Su-Ling

    2016-01-01

    Visual crowding—the inability to see an object when it is surrounded by flankers in the periphery—does not block semantic activation: words rendered unrecognizable by visual crowding still generated robust semantic priming in subsequent lexical decision tasks. Based on this previous finding, the current study further explored whether unrecognizable crowded words can be temporally integrated into a phrase. By showing one word at a time, we presented Chinese four-word idioms with either a congruent or incongruent ending word in order to examine whether the three preceding crowded words can be temporally integrated to form a semantic context that affects the processing of the ending word. Results from both behavioral (Experiment 1) and event-related potential (Experiments 2 and 3) measures showed a congruency effect only in the non-crowded condition, which does not support the existence of unconscious multi-word integration. Aside from four-word idioms, we also found that two-word (modifier + adjective combination) integration—the simplest kind of temporal semantic integration—did not occur in visual crowding (Experiment 4). Our findings suggest that integration of temporally separated words might require conscious awareness, at least under the timing conditions tested in the current study. PMID:26890366

  9. Young children's fast mapping and generalization of words, facts, and pictograms.

    PubMed

    Deák, Gedeon O; Toney, Alexis J

    2013-06-01

    To test general and specific processes of symbol learning, 4- and 5-year-old children learned three kinds of abstract associates for novel objects: words, facts, and pictograms. To test fast mapping (i.e., one-trial learning) and subsequent learning, comprehension was tested after each of four exposures. Production was also tested, as was children's tendency to generalize learned items to new objects in the same taxon. To test for a bias toward mutually exclusive associations, children learned either one-to-one or many-to-many mappings. In Experiment 1, children learned words, facts (with or without incidental novel words), or pictograms. In Experiment 2, children learned words or pictograms. In both of these experiments, children learned words more slowly than facts and pictograms. Pictograms and facts were generalized more systematically than words, but only in Experiment 1. Children learned one-to-one mappings faster only in Experiment 2, when cognitive load was increased. In Experiment 3, 3- and 4-year-olds were taught facts (with novel words), words, and pictograms. Children learned facts faster than words; however, they remembered all items equally well a week later. The results suggest that word learning follows non-specialized memory and associative learning processes. Copyright © 2013 Elsevier Inc. All rights reserved.

  10. The Architecture of Intuition: Fluency and Affect Determine Intuitive Judgments of Semantic and Visual Coherence and Judgments of Grammaticality in Artificial Grammar Learning

    ERIC Educational Resources Information Center

    Topolinski, Sascha; Strack, Fritz

    2009-01-01

    People can intuitively detect whether a word triad has a common remote associate (coherent) or does not have one (incoherent) before and independently of actually retrieving the common associate. The authors argue that semantic coherence increases the processing fluency for coherent triads and that this increased fluency triggers a brief and…

  11. Effects of auditory and visual modalities in recall of words.

    PubMed

    Gadzella, B M; Whitehead, D A

    1975-02-01

    Ten experimental conditions were used to study the effects of auditory and visual (printed words, uncolored and colored pictures) modalities and their various combinations with college students. A recall paradigm was employed in which subjects responded in a written test. Analysis of the data showed that the auditory modality was superior to the visual picture modalities but not significantly different from the visual printed-word modality. Within the visual modalities, printed words were superior to colored pictures. Generally, recall in conditions with multiple modes of stimulus representation was significantly higher than in conditions with single modes. Multiple modalities, consisting of two or three modes, did not differ significantly from each other. It was concluded that any two modalities of the stimuli presented simultaneously were just as effective as three in recall of stimulus words.

  12. Dissociating visual form from lexical frequency using Japanese.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Hogan, John S; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T

    2013-05-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form - an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally, participants responded more quickly to high than low frequency words and to visually familiar relative to less familiar words, independent of script. Critically, the imaging results showed that visual familiarity, as opposed to lexical frequency, had a strong effect on activation in ventral occipito-temporal cortex. Activation here was also greater for Kanji than Hiragana words and this was not due to their inherent differences in visual complexity. These findings can be understood within a predictive coding framework in which vOT receives bottom-up information encoding complex visual forms and top-down predictions from regions encoding non-visual attributes of the stimulus. Copyright © 2012 Elsevier Inc. All rights reserved.
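
    The predictive coding account invoked here can be reduced to a toy loop in which top-down predictions are iteratively corrected by the residual error, with vOT activation read as the size of that error. The vectors and update rate below are illustrative assumptions, not the authors' model.

    ```python
    # Toy predictive coding loop: activation ~ prediction error, which
    # shrinks as top-down predictions converge on the bottom-up input.
    # All numbers are illustrative.
    import numpy as np

    visual_input = np.array([0.9, 0.2, 0.7])  # bottom-up visual-form evidence
    prediction = np.zeros(3)                  # initial top-down prediction
    rate = 0.5

    for step in range(6):
        error = visual_input - prediction       # "vOT activation"
        prediction = prediction + rate * error  # predictions improve
        print(step, round(float(np.abs(error).sum()), 3))
    ```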

  13. Acquisition of linguistic procedures for printed words: neuropsychological implications for learning.

    PubMed

    Berninger, V W

    1988-10-01

    A microcomputerized experiment, administered to 45 children in the 2nd, 5th, and 8th month of first grade, manipulated three variables: (a) stimulus unit (whole word or letter-by-letter presentation), (b) nature of stimulus information (phonically regular words, phonically irregular words, nonsense words, and letter strings, which differ in whether phonemic, orthographic, semantic, and/or name codes are available), and (c) linguistic task (lexical decision, naming, and written reproduction). Letter-by-letter presentation resulted in more accurate lexical decision and naming but not more accurate written reproduction. Interactions between nature of stimulus information and linguistic task occurred. Throughout the year, accuracy was greater for lexical decision than for naming or written reproduction. The superiority of lexical decision cannot be attributed to the higher probability of correct responses on a binary choice task because only consistently correct responses on repeated trials were analyzed. The earlier development of lexical decision, a receptive task, than of naming or written reproduction, production tasks, suggests that hidden units (Hinton & Sejnowski, 1986) in tertiary cortical areas may abstract visual-linguistic associations in printed words before production units in primary cortical areas can produce printed words orally or graphically.

  14. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  15. Learning new meanings for known words: Biphasic effects of prior knowledge

    PubMed Central

    Fang, Xiaoping; Perfetti, Charles; Stafura, Joseph

    2017-01-01

    In acquiring word meanings, learners are often confronted by a single word form that is mapped to two or more meanings. For example, long after learning how to roller-“skate”, one may learn that “skate” is also a kind of fish. Such learning of new meanings for familiar words involves two potentially contrasting processes, relative to new form-new meaning learning: 1) Form-based familiarity may facilitate learning a new meaning, and 2) meaning-based interference may inhibit learning a new meaning. We examined these two processes by having native English speakers learn new, unrelated meanings for familiar (high frequency) and less familiar (low frequency) English words, as well as for unfamiliar (novel or pseudo-) words. Tracking learning with cued-recall tasks at several points during learning revealed a biphasic pattern: higher learning rates and greater learning efficiency for familiar words relative to novel words early in learning, and a reversal of this pattern later in learning. Following learning, interference from original meanings for familiar words was detected in a semantic relatedness judgment task. Additionally, lexical access to familiar words with new meanings became faster compared to their exposure controls, but no such effect occurred for less familiar words. Overall, the results suggest a biphasic pattern of facilitating and interfering processes: Familiar word forms facilitate learning earlier, while interference from original meanings becomes more influential later. This biphasic pattern reflects the co-activation of new and old meanings during learning, a process that may play a role in the lexicalization of new meanings. PMID:29399593

  16. Processing of threat-related information outside the focus of visual attention.

    PubMed

    Calvo, Manuel G; Castillo, M Dolores

    2005-05-01

    This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by 2 simultaneous prime words (1 foveal, i.e., at fixation; 1 parafoveal, i.e., 2.2 deg. of visual angle from fixation), which were presented for 150 ms, one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.

  17. Empowering Students with Word-Learning Strategies: Teach a Child to Fish

    ERIC Educational Resources Information Center

    Graves, Michael F.; Schneider, Steven; Ringstaff, Cathy

    2018-01-01

    This article on word-learning strategies describes a theory- and research-based set of procedures for teaching students to use word-learning strategies--word parts, context clues, the dictionary, and a combined strategy--to infer the meanings of unknown words. The article begins with a rationale for teaching word-learning strategies, particularly…

  18. Word Learning Deficits in Children with Dyslexia

    ERIC Educational Resources Information Center

    Alt, Mary; Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-01-01

    Purpose: The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Method: Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets…

  19. Learning builds on learning: Infants' use of native language sound patterns to learn words

    PubMed Central

    Graf Estes, Katharine

    2014-01-01

    The present research investigated how infants apply prior knowledge of environmental regularities to support new learning. The experiments tested whether infants could exploit experience with native language (English) phonotactic patterns to facilitate associating sounds with meanings during word learning. Fourteen-month-olds heard fluent speech that contained cues for detecting target words; the words were embedded in sequences that occur across word boundaries. A separate group heard the target words embedded without word boundary cues. Infants then participated in an object label-learning task. With the opportunity to use native language patterns to segment the target words, infants subsequently learned the labels. Without this experience, infants failed. Novice word learners can take advantage of early learning about sounds to scaffold lexical development. PMID:24980741

  20. Statistical learning using real-world scenes: extracting categorical regularities without conscious intent.

    PubMed

    Brady, Timothy F; Oliva, Aude

    2008-07-01

    Recent work has shown that observers can parse streams of syllables, tones, or visual shapes and learn statistical regularities in them without conscious intent (e.g., learn that A is always followed by B). Here, we demonstrate that these statistical-learning mechanisms can operate at an abstract, conceptual level. In Experiments 1 and 2, observers incidentally learned which semantic categories of natural scenes covaried (e.g., kitchen scenes were always followed by forest scenes). In Experiments 3 and 4, category learning with images of scenes transferred to words that represented the categories. In each experiment, the category of the scenes was irrelevant to the task. Together, these results suggest that statistical-learning mechanisms can operate at a categorical level, enabling generalization of learned regularities using existing conceptual knowledge. Such mechanisms may guide learning in domains as disparate as the acquisition of causal knowledge and the development of cognitive maps from environmental exploration.
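
    The regularity being learned is a first-order transition probability between categories. A minimal sketch with an invented category stream (not the study's image sequences):

    ```python
    # Estimate P(next category | current category) from a stream of scene
    # categories. The stream is illustrative.
    from collections import Counter

    stream = ["kitchen", "forest", "street", "mountain",
              "kitchen", "forest", "street", "mountain",
              "kitchen", "forest"]

    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])

    for (a, b), c in sorted(pair_counts.items()):
        print(f"P({b} | {a}) = {c / first_counts[a]:.2f}")
    ```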

  1. Evaluating the Benefits of Displaying Word Prediction Lists on a Personal Digital Assistant at the Keyboard Level

    ERIC Educational Resources Information Center

    Tam, Cynthia; Wells, David

    2009-01-01

    Visual-cognitive loads influence the effectiveness of word prediction technology. Adjusting parameters of word prediction programs can lessen visual-cognitive loads. This study evaluated the benefits of WordQ word prediction software for users' performance when the prediction window was moved to a personal digital assistant (PDA) device placed at…

  2. Online learning from input versus offline memory evolution in adult word learning: effects of neighborhood density and phonologically related practice.

    PubMed

    Storkel, Holly L; Bontempo, Daniel E; Pak, Natalie S

    2014-10-01

    In this study, the authors investigated adult word learning to determine how neighborhood density and practice across phonologically related training sets influence online learning from input during training versus offline memory evolution during no-training gaps. Sixty-one adults were randomly assigned to learn low- or high-density nonwords. Within each density condition, participants were trained on one set of words and then were trained on a second set of words, consisting of phonological neighbors of the first set. Learning was measured in a picture-naming test. Data were analyzed using multilevel modeling and spline regression. Steep learning during input was observed, with new words from dense neighborhoods and new words that were neighbors of recently learned words (i.e., second-set words) being learned better than other words. In terms of memory evolution, large and significant forgetting was observed during 1-week gaps in training. Effects of density and practice during memory evolution were opposite of those during input. Specifically, forgetting was greater for high-density and second-set words than for low-density and first-set words. High phonological similarity, regardless of source (i.e., known words or recent training), appears to facilitate online learning from input but seems to impede offline memory evolution.
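
    The spline-regression idea here (different slopes for learning during input and forgetting across gaps) can be sketched with ordinary least squares and a knot at the gap. The data, knot placement, and piecewise-linear form below are illustrative assumptions; the study fit multilevel models over participants.

    ```python
    # Piecewise-linear (spline) fit: one slope during training, a level
    # drop and a new slope after a training gap. Data are invented.
    import numpy as np

    session = np.arange(1.0, 9.0)                 # sessions 1-8
    accuracy = np.array([0.10, 0.35, 0.55, 0.70,  # steep learning from input
                         0.50, 0.60, 0.72, 0.80]) # drop after the 1-week gap
    knot = 4.0

    X = np.column_stack([
        np.ones_like(session),            # intercept
        session,                          # slope during learning
        (session > knot).astype(float),   # level drop across the gap
        np.maximum(session - knot, 0.0),  # slope change after the gap
    ])
    beta, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
    print(beta.round(3))
    ```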

  3. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  4. Visual word ambiguity.

    PubMed

    van Gemert, Jan C; Veenman, Cor J; Smeulders, Arnold W M; Geusebroek, Jan-Mark

    2010-07-01

    This paper studies automatic image classification by modeling soft assignment in the popular codebook model. The codebook model describes an image as a bag of discrete visual words selected from a vocabulary, where the frequency distributions of visual words in an image allow classification. One inherent component of the codebook model is the assignment of discrete visual words to continuous image features. Despite the clear mismatch of this hard assignment with the nature of continuous features, the approach has been successfully applied for some years. In this paper, we investigate four types of soft assignment of visual words to image features. We demonstrate that explicitly modeling visual word assignment ambiguity improves classification performance compared to the hard assignment of the traditional codebook model. The traditional codebook model is compared against our method for five well-known data sets: 15 natural scenes, Caltech-101, Caltech-256, and Pascal VOC 2007/2008. We demonstrate that large codebook vocabulary sizes completely deteriorate the performance of the traditional model, whereas the proposed model performs consistently. Moreover, we show that our method profits in high-dimensional feature spaces and reaps higher benefits when increasing the number of image categories.
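
    The contrast between hard and soft assignment is easy to illustrate. The sketch below builds a bag-of-visual-words histogram both ways, using a random codebook and a Gaussian kernel for the soft weights; the dimensions and bandwidth are arbitrary assumptions, and the paper evaluates several distinct soft-assignment variants.

    ```python
    # Hard vs. soft (kernel) assignment of image features to visual words.
    # Codebook, features, and kernel bandwidth are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    codebook = rng.normal(size=(5, 8))    # 5 visual words in an 8-D space
    features = rng.normal(size=(100, 8))  # descriptors from one image

    # Distance from every feature to every visual word.
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)

    # Hard assignment: each feature votes only for its nearest word.
    hard = np.zeros_like(d)
    hard[np.arange(len(features)), d.argmin(axis=1)] = 1.0

    # Soft assignment: each feature spreads its vote over all words.
    w = np.exp(-d**2 / 2.0)               # Gaussian kernel, sigma = 1
    soft = w / w.sum(axis=1, keepdims=True)

    print("hard histogram:", hard.mean(axis=0).round(3))
    print("soft histogram:", soft.mean(axis=0).round(3))
    ```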

  5. Word Learning and Individual Differences in Word Learning Reflected in Event-Related Potentials

    ERIC Educational Resources Information Center

    Perfetti, Charles A.; Wlotko, Edward W.; Hart, Lesley A.

    2005-01-01

    Adults learned the meanings of rare words (e.g., gloaming) and then made meaning judgments on pairs of words. The 1st word was a trained rare word, an untrained rare word, or an untrained familiar word. Event-related potentials distinguished trained rare words from both untrained rare and familiar words, first at 140 ms and again at 400-600 ms…

  6. When canary primes yellow: effects of semantic memory on overt attention.

    PubMed

    Léger, Laure; Chauvet, Elodie

    2015-02-01

    This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.

  7. The influence of two cognitive-linguistic variables on incidental word learning in 5-year-olds.

    PubMed

    Abel, Alyson D; Schuele, C Melanie

    2014-08-01

    The relation between incidental word learning and two cognitive-linguistic variables--phonological memory and phonological awareness--is not fully understood. Thirty-five typically developing, 5-year-old, preschool children participated in a study examining the association between phonological memory, phonological awareness, and incidental word learning. Children were exposed to target words in a read-aloud story that accompanied a wordless picture book. Target word comprehension was assessed before and after two readings of the story. Phonological awareness predicted incidental word learning but phonological memory did not. The influence of phonological awareness and phonological memory on word learning may be dependent on the demands of the word learning task.

  8. Stroop effects from newly learned color words: effects of memory consolidation and episodic context

    PubMed Central

    Geukes, Sebastian; Gaskell, M. Gareth; Zwitserlood, Pienie

    2015-01-01

    The Stroop task is an excellent tool to test whether reading a word automatically activates its associated meaning, and it has been widely used in mono- and bilingual contexts. Despite its ubiquity, the task has not yet been employed to test the automaticity of recently established word-concept links in novel-word-learning studies, under strict experimental control of learning and testing conditions. In three experiments, we thus paired novel words with native language (German) color words via lexical association and subsequently tested these words in a manual version of the Stroop task. Two crucial findings emerged: When novel-word Stroop trials appeared intermixed among native-word trials, the novel-word Stroop effect was observed immediately after the learning phase. If no native color words were present in a Stroop block, the novel-word Stroop effect only emerged 24 h later. These results suggest that the automatic availability of a novel word's meaning depends either on supportive context from the learning episode and/or on sufficient time for memory consolidation. We discuss how these results can be reconciled with the complementary learning systems account of word learning. PMID:25814973

  9. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  10. A Critical Boundary to the Left-Hemisphere Advantage in Visual-Word Processing

    ERIC Educational Resources Information Center

    Deason, R.G.; Marsolek, C.J.

    2005-01-01

    Two experiments explored boundary conditions for the ubiquitous left-hemisphere advantage in visual-word recognition. Subjects perceptually identified words presented directly to the left or right hemisphere. Strong left-hemisphere advantages were observed for UPPERCASE and lowercase words. However, only a weak effect was observed for…

  11. Visual attention shift to printed words during spoken word recognition in Chinese: The role of phonological information.

    PubMed

    Shen, Wei; Qu, Qingqing; Tong, Xiuhong

    2018-05-01

    The aim of this study was to investigate the extent to which phonological information mediates the visual attention shift to printed Chinese words in spoken word recognition, using an eye-movement technique with a printed-word paradigm. In this paradigm, participants are visually presented with four printed words on a computer screen, which include a target word, a phonological competitor, and two distractors. Participants are then required to select the target word using a computer mouse, and their eye movements are recorded. In Experiment 1, phonological information was manipulated at full phonological overlap; in Experiment 2, it was manipulated at partial phonological overlap; and in Experiment 3, the phonological competitors were manipulated to share either full or partial overlap with targets directly. Results of the three experiments showed that phonological competitor effects were observed in both the full and partial phonological overlap conditions. That is, phonological competitors attracted more fixations than distractors, which suggests that phonological information mediates the visual attention shift during spoken word recognition. More importantly, we found that the mediating role of phonological information varies as a function of the phonological similarity between target words and phonological competitors.

  12. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli

    PubMed Central

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status. PMID:24187542

  13. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    PubMed

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  14. Can Writing a New Word Detract from Learning It? More Negative Effects of Forced Output during Vocabulary Learning

    ERIC Educational Resources Information Center

    Barcroft, Joe

    2006-01-01

    This study examined effects of word writing on second language vocabulary learning. In two experiments, English-speaking learners of Spanish attempted to learn 24 Spanish nouns while viewing word-picture pairs. The participants copied 12 target words and wrote nothing for the other 12 target words being studied. Productive vocabulary learning on…

  15. When a Picture Isn't Worth 1000 Words: Learners Struggle to Find Meaning in Data Visualizations

    ERIC Educational Resources Information Center

    Stofer, Kathryn A.

    2016-01-01

    The oft-repeated phrase "a picture is worth a thousand words" supposes that an image can replace a profusion of words to more easily express complex ideas. For scientific visualizations that represent profusions of numerical data, however, an untranslated academic visualization suffers the same pitfalls untranslated jargon does. Previous…

  16. What Does Brave Look Like? How an Arts-Integrated Poetry Unit Provokes Imaginative and Thoughtful Work from Fifth-Grade Writers

    ERIC Educational Resources Information Center

    George, Alice

    2012-01-01

    This article describes a project in poetry and visual art that leads students to explore metaphor in generative and novel ways. The author shares what she and her teaching partner Ronna Pritikin have learned about fostering brave and joyful student artists and poets. "Be an Artist With Your Words" is a twelve-session residency, in which the author…

  17. Musical Emotions: Functions, Origins, Evolution

    DTIC Science & Technology

    2010-01-01

    might be contentious) neural mechanisms added to our perception of originally mechanical properties of ear. I’ll add that Helmholtz did not touch the main...significant part of conceptual perception is an unconscious process; for example, visual perception takes about 150 ms, which is a long time when measured...missing in terms of neural mechanisms? How do children learn which words and sentences correspond to which objects and situations? Many psychologists

  18. The Power of Effective Design in e-Learning: A Study of the "Mayo Effect" Video

    ERIC Educational Resources Information Center

    Fan, Jiang Ping

    2014-01-01

    When the Mayo Effect video went live on the Mayo intranet in June 2010, it was very well received at Mayo Clinic. The message in the video was so effectively delivered that it became an instant sensation across the institution. The video contains about 461 words. In such a short video, every part of its architectural design, whether it is visual,…

  19. Exploring Metacognitive Visual Literacy Tasks for Teaching Astronomy

    NASA Astrophysics Data System (ADS)

    Slater, Timothy F.; Slater, S.; Dwyer, W.

    2010-01-01

    Undoubtedly, astronomy is a scientific enterprise which often results in colorful and inspirational images of the cosmos that naturally capture our attention. Students encountering astronomy in the college classroom are often bombarded with images, movies, simulations, conceptual cartoons, graphs, and charts intended to convey the substance and technological advancement inherent in astronomy. For students who self-identify as visual learners, this aspect can make the science of astronomy come alive. For students who naturally attend to visual aesthetics, this aspect can make astronomy seem relevant. In other words, the visual nature that accompanies much of the scientific realm of astronomy has the ability to connect a wide range of students to science, not just those few who have great abilities and inclinations toward the world of mathematical analysis. Indeed, this is fortunate for teachers of astronomy, who actively try to find ways to connect and build astronomical understanding with a broad range of student interests, motivations, and abilities. In the context of learning science, metacognition describes students’ self-monitoring, -regulation, and -awareness when thinking about learning. As such, metacognition is one of the foundational pillars supporting what we know about how people learn. Yet, the astronomy teaching and learning community knows very little about how to operationalize and support students’ metacognition in the classroom. In response, the Conceptual Astronomy, Physics and Earth sciences Research (CAPER) Team is developing and pilot-testing metacognitive tasks in the context of astronomy that focus on visual literacy of astronomical phenomena. In the initial versions, students are presented with a scientifically inaccurate narrative supposedly describing visual information, including images and graphical information, and asked to assess and correct the narrative, in the form of peer evaluation. To guide student thinking, students are provided with a scaffolded series of multiple-choice questions highlighting conceptual aspects of the prompt.

  20. Developmental amnesia: Fractionation of developing memory systems.

    PubMed

    Temple, Christine M; Richardson, Paul

    2006-07-01

    Study of the developmental amnesias utilizing a cognitive neuropsychological methodology has highlighted the dissociations that may occur between the development of components of memory. M.M., a new case of developmental amnesia, was identified after screening from the normal population on cognitive and memory measures. Retrospective investigation found that he was of low birthweight. M.M. had impaired semantic memory for knowledge of facts and words. There was impaired episodic memory for words and stories but intact episodic memory for visual designs and features. This forms a double dissociation with Dr S. (Temple, 1992), who had intact verbal but impaired visual episodic memory. M.M. also had impaired autobiographical episodic memory. Nevertheless, learning over repeated trials occurred, consistent with previous theorizing that learning is not simply the effect of recurrent episodic memory. Nor is it the same as establishing semantic memory, since for M.M. semantic memory is also impaired. Within reading, there was an impaired lexico-semantic system, elevated levels of homophone confusion, but intact phonological reading, consistent with surface dyslexia and raising issues about the interrelationship of the semantic system and literacy development. The results are compatible with discrete semi-independent components within memory development, whereby deficits are associated with residual normality, but there may also be an explicit relationship between the semantic memory system and both vocabulary and reading acquisition.

  1. Artful terms: A study on aesthetic word usage for visual art versus film and music.

    PubMed

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187-201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms.

  2. Artful terms: A study on aesthetic word usage for visual art versus film and music

    PubMed Central

    Augustin, M Dorothee; Carbon, Claus-Christian; Wagemans, Johan

    2012-01-01

    Despite the importance of the arts in human life, psychologists still know relatively little about what characterises their experience for the recipient. The current research approaches this problem by studying people's word usage in aesthetics, with a focus on three important art forms: visual art, film, and music. The starting point was a list of 77 words known to be useful to describe aesthetic impressions of visual art (Augustin et al 2012, Acta Psychologica 139 187–201). Focusing on ratings of likelihood of use, we examined to what extent word usage in aesthetic descriptions of visual art can be generalised to film and music. The results support the claim of an interplay of generality and specificity in aesthetic word usage. Terms with equal likelihood of use for all art forms included beautiful, wonderful, and terms denoting originality. Importantly, emotion-related words received higher ratings for film and music than for visual art. To our knowledge this is direct evidence that aesthetic experiences of visual art may be less affectively loaded than, for example, experiences of music. The results render important information about aesthetic word usage in the realm of the arts and may serve as a starting point to develop tailored measurement instruments for different art forms. PMID:23145287

  3. [Neurophysiological correlates of learning disabilities in Japan].

    PubMed

    Miyao, M

    1999-05-01

    In the present study, we developed a new event-related potentials (ERPs) stimulator system applicable to simultaneous audio visual stimuli, and tested it clinically on healthy adults and patients with learning disabilities (LD), using Japanese language task stimuli: hiragana letters, kanji letters, and kanji letters with spoken words. (1) The origins of the P300 component were identified in these tasks. The sources in the former two tasks were located in different areas. In the simultaneous task stimuli, a combination of the two P300 sources was observed with dominance in the left posterior inferior temporal area. (2) In patients with learning disabilities, those with reading and writing disability showed low amplitudes in the left hemisphere in response to visual language task stimuli with kanji and hiragana letters, in contrast to healthy children and LD patients with arithmetic disability. (3) To evaluate the effect of methylphenidate (10 mg) on ADD, paired-associate ERPs were recorded. Methylphenidate increased the amplitude of P300.

  4. Direct comparison of four implicit memory tests.

    PubMed

    Rajaram, S; Roediger, H L

    1993-07-01

    Four verbal implicit memory tests, word identification, word stem completion, word fragment completion, and anagram solution, were directly compared in one experiment and were contrasted with free recall. On all implicit tests, priming was greatest from prior visual presentation of words, less (but significant) from auditory presentation, and least from pictorial presentations. Typefont did not affect priming. In free recall, pictures were recalled better than words. The four implicit tests all largely index perceptual (lexical) operations in recognizing words, or visual word form representations.

  5. Short-term retention of pictures and words: evidence for dual coding systems.

    PubMed

    Pellegrino, J W; Siegel, A W; Dhawan, M

    1975-03-01

    The recall of picture and word triads was examined in three experiments that manipulated the type of distraction in a Brown-Peterson short-term retention task. In all three experiments recall of pictures was superior to words under auditory distraction conditions. Visual distraction produced high performance levels with both types of stimuli, whereas combined auditory and visual distraction significantly reduced picture recall without further affecting word recall. The results were interpreted in terms of the dual coding hypothesis and indicated that pictures are encoded into separate visual and acoustic processing systems while words are primarily acoustically encoded.

  6. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  7. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  8. Orthographic learning, fast and slow: Lexical competition effects reveal the time course of word learning in developing readers.

    PubMed

    Tamura, Niina; Castles, Anne; Nation, Kate

    2017-06-01

    Children learn new words via their everyday reading experience, but little is known about how this learning happens. We addressed this by focusing on the conditions needed for new words to become familiar to children, drawing a distinction between lexical configuration (the acquisition of word knowledge) and lexical engagement (the emergence of interactive processes between newly learned words and existing words). In Experiment 1, 9-11-year-olds saw unfamiliar words in one of two storybook conditions, differing in degree of focus on the new words but matched for frequency of exposure. Children showed good learning of the novel words in terms of both configuration (form and meaning) and engagement (lexical competition). A frequency manipulation under incidental learning conditions in Experiment 2 revealed different time-courses of learning: a fast lexical configuration process, indexed by explicit knowledge, and a slower lexicalization process, indexed by lexical competition. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  9. Uninformative contexts support word learning for high-skill spellers.

    PubMed

    Eskenazi, Michael A; Swischuk, Natascha K; Folk, Jocelyn R; Abraham, Ashley N

    2018-04-30

    The current study investigated how high-skill spellers and low-skill spellers incidentally learn words during reading. The purpose of the study was to determine whether readers can use uninformative contexts to support word learning after forming a lexical representation for a novel word, consistent with instance-based resonance processes. Previous research has found that uninformative contexts damage word learning; however, there may have been insufficient exposure to informative contexts (only one) prior to exposure to uninformative contexts (Webb, 2007; Webb, 2008). In Experiment 1, participants read sentences with one novel word (i.e., blaph, clurge) embedded in them in three different conditions: Informative (six informative contexts to support word learning), Mixed (three informative contexts followed by three uninformative contexts), and Uninformative (six uninformative contexts). Experiment 2 added a new condition with only three informative contexts to further clarify the conclusions of Experiment 1. Results indicated that uninformative contexts can support word learning, but only for high-skill spellers. Further, when participants learned the spelling of the novel word, they were more likely to learn the meaning of that word. This effect was much larger for high-skill spellers than for low-skill spellers. Results are consistent with the Lexical Quality Hypothesis (LQH) in that high-skill spellers form stronger orthographic representations which support word learning (Perfetti, 2007). Results also support an instance-based resonance process of word learning in that prior informative contexts can be reactivated to support word learning in future contexts (Bolger, Balass, Landen, & Perfetti, 2008; Balass, Nelson, & Perfetti, 2010; Reichle & Perfetti, 2003). (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  10. Using speakers' referential intentions to model early cross-situational word learning.

    PubMed

    Frank, Michael C; Goodman, Noah D; Tenenbaum, Joshua B

    2009-05-01

    Word learning is a "chicken and egg" problem. If a child could understand speakers' utterances, it would be easy to learn the meanings of individual words, and once a child knows what many words mean, it is easy to infer speakers' intended meanings. To the beginning learner, however, both individual word meanings and speakers' intentions are unknown. We describe a computational model of word learning that solves these two inference problems in parallel, rather than relying exclusively on either the inferred meanings of utterances or cross-situational word-meaning associations. We tested our model using annotated corpus data and found that it inferred pairings between words and object concepts with higher precision than comparison models. Moreover, as the result of making probabilistic inferences about speakers' intentions, our model explains a variety of behavioral phenomena described in the word-learning literature. These phenomena include mutual exclusivity, one-trial learning, cross-situational learning, the role of words in object individuation, and the use of inferred intentions to disambiguate reference.
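
    A drastically simplified version of this joint inference can be sketched by scoring candidate lexicons on how well they explain which word the speaker uttered in each situation, marginalizing over the intended referent. The corpus, probabilities, and brute-force search below are toy assumptions, far coarser than the paper's Bayesian model.

    ```python
    # Toy joint inference over lexicons and speaker intentions: a lexicon
    # is scored by how well it predicts the uttered word once the intended
    # referent is marginalized out. Corpus and probabilities are invented.
    from itertools import product

    corpus = [  # (objects present, word uttered)
        ({"dog", "ball"}, "dax"),
        ({"dog", "cup"},  "dax"),
        ({"ball", "cup"}, "fep"),
        ({"ball", "dog"}, "fep"),
    ]
    words, objects = ["dax", "fep"], ["dog", "ball", "cup"]

    def score(lexicon):
        total = 1.0
        for present, word in corpus:
            # The speaker intends one present object (uniformly at random)
            # and utters that object's lexicon word with high probability.
            p = sum(0.9 if lexicon[word] == o else 0.05 for o in present)
            total *= p / len(present)
        return total

    best = max((dict(zip(words, ref)) for ref in product(objects, repeat=2)),
               key=score)
    print(best)  # {'dax': 'dog', 'fep': 'ball'}
    ```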

  11. The Inhibitory Mechanism in Learning Ambiguous Words in a Second Language

    PubMed Central

    Lu, Yao; Wu, Junjie; Dunlap, Susan; Chen, Baoguo

    2017-01-01

    Ambiguous words are hard to learn, yet little is known about what causes this difficulty. The current study aimed to investigate the relationship between the representations of new and prior meanings of ambiguous words in second language (L2) learning, and to explore the function of inhibitory control on L2 ambiguous word learning at the initial stage of learning. During a 4-day learning phase, Chinese–English bilinguals learned 30 novel English words for 30 min per day using bilingual flashcards. Half of the words to be learned were unambiguous (had one meaning) and half were ambiguous (had two semantically unrelated meanings learned in sequence). Inhibitory control was introduced as a subject variable measured by a Stroop task. The semantic representations established for the studied items were probed using a cross-language semantic relatedness judgment task, in which the learned English words served as the prime, and the targets were either semantically related or unrelated to the prime. Results showed that response latencies for the second meaning of ambiguous words were slower than for the first meaning and for unambiguous words, and that performance on only the second meaning of ambiguous words was predicted by inhibitory control ability. These results suggest that, at the initial stage of L2 ambiguous word learning, the representation of the second meaning is weak, probably interfered with by the representation of the prior learned meaning. Moreover, inhibitory control may modulate learning of the new meanings, such that individuals with better inhibitory control may more effectively suppress interference from the first meaning, and thus learn the new meaning more quickly. PMID:28496423

  12. The Inhibitory Mechanism in Learning Ambiguous Words in a Second Language.

    PubMed

    Lu, Yao; Wu, Junjie; Dunlap, Susan; Chen, Baoguo

    2017-01-01

    Ambiguous words are hard to learn, yet little is known about what causes this difficulty. The current study aimed to investigate the relationship between the representations of new and prior meanings of ambiguous words in second language (L2) learning, and to explore the function of inhibitory control on L2 ambiguous word learning at the initial stage of learning. During a 4-day learning phase, Chinese-English bilinguals learned 30 novel English words for 30 min per day using bilingual flashcards. Half of the words to be learned were unambiguous (had one meaning) and half were ambiguous (had two semantically unrelated meanings learned in sequence). Inhibitory control was introduced as a subject variable measured by a Stroop task. The semantic representations established for the studied items were probed using a cross-language semantic relatedness judgment task, in which the learned English words served as the prime, and the targets were either semantically related or unrelated to the prime. Results showed that response latencies for the second meaning of ambiguous words were slower than for the first meaning and for unambiguous words, and that performance on only the second meaning of ambiguous words was predicted by inhibitory control ability. These results suggest that, at the initial stage of L2 ambiguous word learning, the representation of the second meaning is weak, probably interfered with by the representation of the prior learned meaning. Moreover, inhibitory control may modulate learning of the new meanings, such that individuals with better inhibitory control may more effectively suppress interference from the first meaning, and thus learn the new meaning more quickly.

  13. N400 Response Indexes Word Learning from Linguistic Context in Children

    ERIC Educational Resources Information Center

    Abel, Alyson D.; Schneider, Julie; Maguire, Mandy J

    2018-01-01

    Word learning from linguistic context is essential for vocabulary growth from grade school onward; however, little is known about the mechanisms underlying successful word learning in children. Current methods for studying word learning development require children to identify the meaning of the word after each exposure, a method that interacts…

  14. Fast Brain Plasticity during Word Learning in Musically-Trained Children.

    PubMed

    Dittinger, Eva; Chobert, Julie; Ziegler, Johannes C; Besson, Mireille

    2017-01-01

    Children learn new words every day and this ability requires auditory perception, phoneme discrimination, attention, associative learning and semantic memory. Based on previous results showing that some of these functions are enhanced by music training, we investigated learning of novel words through picture-word associations in musically-trained and control children (8-12 year-old) to determine whether music training would positively influence word learning. Results showed that musically-trained children outperformed controls in a learning paradigm that included picture-sound matching and semantic associations. Moreover, the differences between unexpected and expected learned words, as reflected by the N200 and N400 effects, were larger in children with music training compared to controls after only 3 min of learning the meaning of novel words. In line with previous results in adults, these findings clearly demonstrate a correlation between music training and better word learning. It is argued that these benefits reflect both bottom-up and top-down influences. The present learning paradigm might provide a useful dynamic diagnostic tool to determine which perceptive and cognitive functions are impaired in children with learning difficulties.

  15. Fast Brain Plasticity during Word Learning in Musically-Trained Children

    PubMed Central

    Dittinger, Eva; Chobert, Julie; Ziegler, Johannes C.; Besson, Mireille

    2017-01-01

    Children learn new words every day and this ability requires auditory perception, phoneme discrimination, attention, associative learning and semantic memory. Based on previous results showing that some of these functions are enhanced by music training, we investigated learning of novel words through picture-word associations in musically-trained and control children (8–12 year-old) to determine whether music training would positively influence word learning. Results showed that musically-trained children outperformed controls in a learning paradigm that included picture-sound matching and semantic associations. Moreover, the differences between unexpected and expected learned words, as reflected by the N200 and N400 effects, were larger in children with music training compared to controls after only 3 min of learning the meaning of novel words. In line with previous results in adults, these findings clearly demonstrate a correlation between music training and better word learning. It is argued that these benefits reflect both bottom-up and top-down influences. The present learning paradigm might provide a useful dynamic diagnostic tool to determine which perceptive and cognitive functions are impaired in children with learning difficulties. PMID:28553213

  16. The Effect of the Balance of Orthographic Neighborhood Distribution in Visual Word Recognition

    ERIC Educational Resources Information Center

    Robert, Christelle; Mathey, Stephanie; Zagar, Daniel

    2007-01-01

    The present study investigated whether the balance of neighborhood distribution (i.e., the way orthographic neighbors are spread across letter positions) influences visual word recognition. Three word conditions were compared. Word neighbors were either concentrated on one letter position (e.g.,nasse/basse-lasse-tasse-masse) or were unequally…

  17. Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification

    ERIC Educational Resources Information Center

    Burt, Jennifer S.

    2009-01-01

University students participated in five experiments concerning the effects of unmasked, orthographically similar primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…

  18. Evidence for Early Morphological Decomposition in Visual Word Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2010-01-01

    We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…

  19. Intrusive effects of implicitly processed information on explicit memory.

    PubMed

    Sentz, Dustin F; Kirkhart, Matthew W; LoPresto, Charles; Sobelman, Steven

    2002-02-01

This study described the interference of implicitly processed information on the memory for explicitly processed information. Participants studied a list of words either auditorily or visually under instructions to remember the words (explicit study). They were then visually presented another word list under instructions that facilitated implicit but not explicit processing. Following a distractor task, memory for the explicit study list was tested with either a visual or auditory recognition task that included new words, words from the explicit study list, and words implicitly processed. Analysis indicated participants both failed to recognize words from the explicit study list and falsely recognized words that were implicitly processed as originating from the explicit study list. However, this effect only occurred when the testing modality was visual, thereby matching the modality of the implicitly processed information, regardless of the modality of the explicit study list. This "modality effect" for explicit memory was interpreted as poor source memory for implicitly processed information in light of the procedures used, as well as illustrating an example of "remembering causing forgetting."

  20. Category learning in the color-word contingency learning paradigm.

    PubMed

    Schmidt, James R; Augustinova, Maria; De Houwer, Jan

    2018-04-01

In the typical color-word contingency learning paradigm, participants respond to the print color of words where each word is presented most often in one color. Learning is indicated by faster and more accurate responses when a word is presented in its usual color, relative to another color. To eliminate the possibility that this effect is driven exclusively by the familiarity of item-specific word-color pairings, we examine whether contingency learning effects can also be observed when colors are related to categories of words rather than to individual words. To this end, the reported experiments used three categories of words (animals, verbs, and professions) that were each predictive of one color. Importantly, each individual word was presented only once, thus eliminating individual color-word contingencies. Nevertheless, for the first time, a category-based contingency effect was observed, with faster and more accurate responses when a category item was presented in the color in which most of the other items of that category were presented. This finding helps to constrain episodic learning models and sets the stage for new research on category-based contingency learning.

  1. The audiovisual structure of onomatopoeias: An intrusion of real-world physics in lexical creation.

    PubMed

    Taitz, Alan; Assaneo, M Florencia; Elisei, Natalia; Trípodi, Mónica; Cohen, Laurent; Sitt, Jacobo D; Trevisan, Marcos A

    2018-01-01

Sound-symbolic word classes are found in different cultures and languages worldwide. These words are continuously produced to code complex information about events. Here we explore the capacity of creative language to transport complex multisensory information in a controlled experiment, where our participants improvised onomatopoeias from noisy moving objects in audio, visual and audiovisual formats. We found that consonants communicate movement types (slide, hit or ring) mainly through the manner of articulation in the vocal tract. Vowels communicate shapes in visual stimuli (spiky or rounded) and sound frequencies in auditory stimuli through the configuration of the lips and tongue. A machine learning model was trained to classify movement types and used to validate generalizations of our results across formats. We then applied the classifier to a list of cross-linguistic onomatopoeias: simple actions were correctly classified, while different aspects were selected to build onomatopoeias of complex actions. These results show how the different aspects of complex sensory information are coded and how they interact in the creation of novel onomatopoeias.
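
    The classifier is described only at a high level here. As a rough, hypothetical sketch of the approach (the feature encoding, data, and model choice below are invented for illustration and are not the study's), a movement-type classifier over articulatory features might look like this:

    ```python
    # Hypothetical sketch: classifying movement types (slide / hit / ring) from
    # articulatory features of improvised onomatopoeias. Features and data are
    # invented for illustration; the paper's actual model may differ.
    from sklearn.linear_model import LogisticRegression

    # Toy vectors: [fricative_ratio, plosive_ratio, nasal_ratio, mean_vowel_height]
    X = [
        [0.8, 0.1, 0.0, 0.3],   # "ssssh"-like -> slide
        [0.7, 0.2, 0.0, 0.4],
        [0.1, 0.9, 0.0, 0.2],   # "pam"-like   -> hit
        [0.0, 0.8, 0.1, 0.3],
        [0.0, 0.2, 0.7, 0.8],   # "ding"-like  -> ring
        [0.1, 0.1, 0.8, 0.7],
    ]
    y = ["slide", "slide", "hit", "hit", "ring", "ring"]

    clf = LogisticRegression(max_iter=1000).fit(X, y)
    print(clf.predict([[0.75, 0.15, 0.0, 0.35]]))  # expected: ['slide']
    ```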

  2. The impact of inverted text on visual word processing: An fMRI study.

    PubMed

    Sussman, Bethany L; Reddigari, Samir; Newman, Sharlene D

    2018-06-01

Visual word recognition has been studied for decades. One question that has received limited attention is how different text presentation orientations disrupt word recognition. By examining how word recognition processes may be disrupted by different text orientations it is hoped that new insights can be gained concerning the process. Here, we examined the impact of rotating and inverting text on the neural network responsible for visual word recognition, focusing primarily on a region of the occipito-temporal cortex referred to as the visual word form area (VWFA). A lexical decision task was employed in which words and pseudowords were presented in one of three orientations (upright, rotated or inverted). The results demonstrate that inversion caused the greatest disruption of visual word recognition processes. Both rotated and inverted text elicited increased activation in spatial attention regions within the right parietal cortex. However, inverted text recruited phonological and articulatory processing regions within the left inferior frontal and left inferior parietal cortices. Finally, the VWFA was found to not behave similarly to the fusiform face area in that unusual text orientations resulted in increased activation and not decreased activation. It is hypothesized here that the VWFA activation is modulated by feedback from linguistic processes.

  3. Resting state neural networks for visual Chinese word processing in Chinese adults and children.

    PubMed

    Li, Ling; Liu, Jiangang; Chen, Feiyan; Feng, Lu; Li, Hong; Tian, Jie; Lee, Kang

    2013-07-01

This study examined the resting state neural networks for visual Chinese word processing in Chinese children and adults. Both the functional connectivity (FC) and amplitude of low frequency fluctuation (ALFF) approaches were used to analyze the fMRI data collected when Chinese participants were not engaged in any specific explicit tasks. We correlated time series extracted from the visual word form area (VWFA) with those in other regions in the brain. We also performed ALFF analysis in the resting state FC networks. The FC results revealed that, regarding the functionally connected brain regions, there exist similar intrinsically organized resting state networks for visual Chinese word processing in adults and children, suggesting that such networks may already be functional after 3–4 years of informal exposure to reading plus 3–4 years of formal schooling. The ALFF results revealed that children appear to recruit more neural resources than adults in generally reading-irrelevant brain regions. Differences between child and adult ALFF results suggest that children's intrinsic word processing network during the resting state, though similar in functional connectivity, is still undergoing development. Further exposure to visual words and experience with reading are needed for children to develop a mature intrinsic network for word processing. The developmental course of the intrinsically organized word processing network may parallel that of the explicit word processing network.

  4. Event-Related Potential Evidence in Chinese Children: Type of Literacy Training Modulates Neural Orthographic Sensitivity

    ERIC Educational Resources Information Center

    Zhao, Pei; Zhao, Jing; Weng, Xuchu; Li, Su

    2018-01-01

    Visual word N170 is an index of perceptual expertise for visual words across different writing systems. Recent developmental studies have shown the early emergence of visual word N170 and its close association with individual's reading ability. In the current study, we investigated whether fine-tuning N170 for Chinese characters could emerge after…

  5. Word meaning and the control of eye fixation: semantic competitor effects and the visual world paradigm.

    PubMed

    Huettig, Falk; Altmann, Gerry T M

    2005-05-01

When participants are presented simultaneously with spoken language and a visual display depicting objects to which that language refers, participants spontaneously fixate the visual referents of the words being heard [Cooper, R. M. (1974). The control of eye fixation by the meaning of spoken language: A new methodology for the real-time investigation of speech perception, memory, and language processing. Cognitive Psychology, 6(1), 84–107]. We demonstrate here that such spontaneous fixation can be driven by partial semantic overlap between a word and a visual object. Participants heard the word 'piano' when (a) a piano was depicted amongst unrelated distractors; (b) a trumpet was depicted amongst those same distractors; and (c) both the piano and the trumpet were depicted. The probability of fixating the piano and the trumpet in the first two conditions rose as the word 'piano' unfolded. In the final condition, only fixations to the piano rose, although the trumpet was fixated more than the distractors. We conclude that eye movements are driven by the degree of match, along various dimensions that go beyond simple visual form, between a word and the mental representations of objects in the concurrent visual field.

  6. Dynamic Influence of Emotional States on Novel Word Learning

    PubMed Central

    Guo, Jingjing; Zou, Tiantian; Peng, Danling

    2018-01-01

Many researchers recognize that it is unrealistic to isolate language learning and processing from emotions. However, few studies on language learning have taken emotions into consideration, so the likely influences of emotions on language learning remain unclear. The current study therefore aimed to examine the effects of emotional states on novel word learning and how these effects change as learning continues and tasks vary. Positive, negative or neutral pictures were employed to induce a given emotional state, and then participants learned the novel words through association with line-drawing pictures in four successive learning phases. At the end of each learning phase, participants were instructed to fulfill a semantic category judgment task (in Experiment 1) or a word-picture semantic consistency judgment task (in Experiment 2) to explore the effects of emotional states on different depths of word learning. Converging results demonstrated that a negative emotional state led to worse performance compared with the neutral condition; however, how a positive emotional state affected learning varied with the learning task. Specifically, a facilitative role of positive emotional state was observed in semantic category learning but disappeared in word-specific meaning learning. Moreover, the emotional modulation of novel word learning was dynamic and changeable as learning continued, and final attainment of the learned words tended to be similar across emotional states. The findings suggest that the impact of emotion can be offset as novel words become increasingly familiar and part of the existing lexicon. PMID:29695994

  7. Dynamic Influence of Emotional States on Novel Word Learning.

    PubMed

    Guo, Jingjing; Zou, Tiantian; Peng, Danling

    2018-01-01

Many researchers recognize that it is unrealistic to isolate language learning and processing from emotions. However, few studies on language learning have taken emotions into consideration, so the likely influences of emotions on language learning remain unclear. The current study therefore aimed to examine the effects of emotional states on novel word learning and how these effects change as learning continues and tasks vary. Positive, negative or neutral pictures were employed to induce a given emotional state, and then participants learned the novel words through association with line-drawing pictures in four successive learning phases. At the end of each learning phase, participants were instructed to fulfill a semantic category judgment task (in Experiment 1) or a word-picture semantic consistency judgment task (in Experiment 2) to explore the effects of emotional states on different depths of word learning. Converging results demonstrated that a negative emotional state led to worse performance compared with the neutral condition; however, how a positive emotional state affected learning varied with the learning task. Specifically, a facilitative role of positive emotional state was observed in semantic category learning but disappeared in word-specific meaning learning. Moreover, the emotional modulation of novel word learning was dynamic and changeable as learning continued, and final attainment of the learned words tended to be similar across emotional states. The findings suggest that the impact of emotion can be offset as novel words become increasingly familiar and part of the existing lexicon.

  8. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  9. More Limitations to Monolingualism: Bilinguals Outperform Monolinguals in Implicit Word Learning.

    PubMed

    Escudero, Paola; Mulak, Karen E; Fu, Charlene S L; Singh, Leher

    2016-01-01

To succeed at cross-situational word learning, learners must infer word-object mappings by attending to the statistical co-occurrences of novel objects and labels across multiple encounters. While past studies have investigated this as a learning mechanism for infants and monolingual adults, bilinguals' cross-situational word learning abilities have yet to be tested. Here, we compared monolinguals' and bilinguals' performance on a cross-situational word learning paradigm that featured phonologically distinct word pairs (e.g., BON-DEET) and phonologically similar word pairs that varied by a single consonant or vowel segment (e.g., BON-TON, DEET-DIT, respectively). Both groups learned the novel word-referent mappings, providing evidence that cross-situational word learning is a learning strategy also available to bilingual adults. Furthermore, bilinguals were overall more accurate than monolinguals. This supports the view that bilingualism fosters a wide range of cognitive advantages that may benefit implicit word learning. Additionally, response patterns to the different trial types revealed greater difficulty for vowel minimal pairs than for consonant minimal pairs, replicating the pattern found in monolinguals by Escudero et al. (2016) in a different English accent. Specifically, all participants failed to learn vowel contrasts differentiated by vowel height. We discuss whether this bilingual advantage is language-specific or a general advantage.

  10. More Limitations to Monolingualism: Bilinguals Outperform Monolinguals in Implicit Word Learning

    PubMed Central

    Escudero, Paola; Mulak, Karen E.; Fu, Charlene S. L.; Singh, Leher

    2016-01-01

To succeed at cross-situational word learning, learners must infer word-object mappings by attending to the statistical co-occurrences of novel objects and labels across multiple encounters. While past studies have investigated this as a learning mechanism for infants and monolingual adults, bilinguals' cross-situational word learning abilities have yet to be tested. Here, we compared monolinguals' and bilinguals' performance on a cross-situational word learning paradigm that featured phonologically distinct word pairs (e.g., BON-DEET) and phonologically similar word pairs that varied by a single consonant or vowel segment (e.g., BON-TON, DEET-DIT, respectively). Both groups learned the novel word-referent mappings, providing evidence that cross-situational word learning is a learning strategy also available to bilingual adults. Furthermore, bilinguals were overall more accurate than monolinguals. This supports the view that bilingualism fosters a wide range of cognitive advantages that may benefit implicit word learning. Additionally, response patterns to the different trial types revealed greater difficulty for vowel minimal pairs than for consonant minimal pairs, replicating the pattern found in monolinguals by Escudero et al. (2016) in a different English accent. Specifically, all participants failed to learn vowel contrasts differentiated by vowel height. We discuss whether this bilingual advantage is language-specific or a general advantage. PMID:27574513

  11. MEG masked priming evidence for form-based decomposition of irregular verbs

    PubMed Central

    Fruchter, Joseph; Stockall, Linnaea; Marantz, Alec

    2013-01-01

    To what extent does morphological structure play a role in early processing of visually presented English past tense verbs? Previous masked priming studies have demonstrated effects of obligatory form-based decomposition for genuinely affixed words (teacher-TEACH) and pseudo-affixed words (corner-CORN), but not for orthographic controls (brothel-BROTH). Additionally, MEG single word reading studies have demonstrated that the transition probability from stem to affix (in genuinely affixed words) modulates an early evoked response known as the M170; parallel findings have been shown for the transition probability from stem to pseudo-affix (in pseudo-affixed words). Here, utilizing the M170 as a neural index of visual form-based morphological decomposition, we ask whether the M170 demonstrates masked morphological priming effects for irregular past tense verbs (following a previous study which obtained behavioral masked priming effects for irregulars). Dual mechanism theories of the English past tense predict a rule-based decomposition for regulars but not for irregulars, while certain single mechanism theories predict rule-based decomposition even for irregulars. MEG data was recorded for 16 subjects performing a visual masked priming lexical decision task. Using a functional region of interest (fROI) defined on the basis of repetition priming and regular morphological priming effects within the left fusiform and inferior temporal regions, we found that activity in this fROI was modulated by the masked priming manipulation for irregular verbs, during the time window of the M170. We also found effects of the scores generated by the learning model of Albright and Hayes (2003) on the degree of priming for irregular verbs. The results favor a single mechanism account of the English past tense, in which even irregulars are decomposed into stems and affixes prior to lexical access, as opposed to a dual mechanism model, in which irregulars are recognized as whole forms. PMID:24319420

  12. Quantitative learning strategies based on word networks

    NASA Astrophysics Data System (ADS)

    Zhao, Yue-Tian-Yi; Jia, Zi-Yang; Tang, Yong; Xiong, Jason Jie; Zhang, Yi-Cheng

    2018-02-01

Learning English requires considerable effort, but the way that vocabulary is introduced in textbooks is not optimized for learning efficiency. With the increasing population of English learners, optimizing the learning process will have a significant impact on English learning and teaching. Recent developments in big data analysis and complex network science provide additional opportunities to design and further investigate strategies for English learning. In this paper, quantitative English learning strategies based on a word network and word usage information are proposed. The strategies integrate word frequency with topological structural information. By analyzing the influence of connected learned words, learning weights for the unlearned words and dynamic updating of the network are studied and analyzed. The results suggest that the quantitative strategies significantly improve learning efficiency while maintaining effectiveness. In particular, the optimized-weight-first strategy and the segmented strategies outperform the other strategies. The results provide opportunities for researchers and practitioners to reconsider the way English is taught and to design vocabularies quantitatively by balancing efficiency and learning costs based on the word network.
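
    As a hypothetical illustration of this kind of strategy (the weighting rule below, frequency boosted by the fraction of already-learned neighbors, is invented; the paper's scheme may differ), a greedy network-based vocabulary ordering can be sketched as follows:

    ```python
    # Hypothetical sketch of a network-based vocabulary-learning strategy:
    # repeatedly pick the unlearned word whose weight (frequency boosted by the
    # fraction of already-learned neighbors) is highest. The exact weighting in
    # the paper may differ; this only illustrates the general idea.
    import networkx as nx

    G = nx.Graph()
    # Edges connect words that co-occur; node attribute 'freq' is usage frequency.
    words = {"the": 100, "cat": 20, "sat": 15, "mat": 10, "quantum": 2}
    G.add_nodes_from((w, {"freq": f}) for w, f in words.items())
    G.add_edges_from([("the", "cat"), ("cat", "sat"), ("sat", "mat"), ("the", "mat")])

    learned = set()

    def weight(w):
        nbrs = list(G.neighbors(w))
        known = sum(n in learned for n in nbrs) / len(nbrs) if nbrs else 0.0
        return G.nodes[w]["freq"] * (1.0 + known)

    while len(learned) < len(G):
        nxt = max((w for w in G if w not in learned), key=weight)
        learned.add(nxt)          # weights of the remaining words update each round
        print("learn:", nxt)
    ```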

  13. Phonological and Semantic Knowledge Are Causal Influences on Learning to Read Words in Chinese

    ERIC Educational Resources Information Center

    Zhou, Lulin; Duff, Fiona J.; Hulme, Charles

    2015-01-01

    We report a training study that assesses whether teaching the pronunciation and meaning of spoken words improves Chinese children's subsequent attempts to learn to read the words. Teaching the pronunciations of words helps children to learn to read those same words, and teaching the pronunciations and meanings improves learning still further.…

  14. Character Decomposition and Transposition Processes of Chinese Compound Words in Rapid Serial Visual Presentation.

    PubMed

    Cao, Hong-Wen; Yang, Ke-Yu; Yan, Hong-Mei

    2017-01-01

    Character order information is encoded at the initial stage of Chinese word processing, however, its time course remains underspecified. In this study, we assess the exact time course of the character decomposition and transposition processes of two-character Chinese compound words (canonical, transposed, or reversible words) compared with pseudowords using dual-target rapid serial visual presentation (RSVP) of stimuli appearing at 30 ms per character with no inter-stimulus interval. The results indicate that Chinese readers can identify words with character transpositions in rapid succession; however, a transposition cost is involved in identifying transposed words compared to canonical words. In RSVP reading, character order of words is more likely to be reversed during the period from 30 to 180 ms for canonical and reversible words, but the period from 30 to 240 ms for transposed words. Taken together, the findings demonstrate that the holistic representation of the base word is activated, however, the order of the two constituent characters is not strictly processed during the very early stage of visual word processing.

  15. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to indicate whether each recognition word had not been presented or, if it had, with which type of information it was presented during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipole analysis of the MEG data indicated higher equivalent current dipole amplitudes in the right fusiform gyrus during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.

  16. A Positive Generation Effect on Memory for Auditory Context.

    PubMed

    Overman, Amy A; Richard, Alison G; Stephens, Joseph D W

    2017-06-01

    Self-generation of information during memory encoding has large positive effects on subsequent memory for items, but mixed effects on memory for contextual information associated with items. A processing account of generation effects on context memory (Mulligan in Journal of Experimental Psychology: Learning, Memory, and Cognition, 30(4), 838-855, 2004; Mulligan, Lozito, & Rosner in Journal of Experimental Psychology: Learning, Memory, and Cognition, 32(4), 836-846, 2006) proposes that these effects depend on whether the generation task causes any shift in processing of the type of context features for which memory is being tested. Mulligan and colleagues have used this account to predict various negative effects of generation on context memory, but the account also predicts positive generation effects under certain circumstances. The present experiment provided a critical test of the processing account by examining how generation affected memory for auditory rather than visual context. Based on the processing account, we predicted that generation of rhyme words should enhance processing of auditory information associated with the words (i.e., voice gender), whereas generation of antonym words should have no effect. These predictions were confirmed, providing support to the processing account.

  17. A model linking immediate serial recall, the Hebb repetition effect and the learning of phonological word forms

    PubMed Central

    Page, M. P. A.; Norris, D.

    2009-01-01

    We briefly review the considerable evidence for a common ordering mechanism underlying both immediate serial recall (ISR) tasks (e.g. digit span, non-word repetition) and the learning of phonological word forms. In addition, we discuss how recent work on the Hebb repetition effect is consistent with the idea that learning in this task is itself a laboratory analogue of the sequence-learning component of phonological word-form learning. In this light, we present a unifying modelling framework that seeks to account for ISR and Hebb repetition effects, while being extensible to word-form learning. Because word-form learning is performed in the service of later word recognition, our modelling framework also subsumes a mechanism for word recognition from continuous speech. Simulations of a computational implementation of the modelling framework are presented and are shown to be in accordance with data from the Hebb repetition paradigm. PMID:19933143

  18. Processing of visual semantic information to concrete words: temporal dynamics and neural mechanisms indicated by event-related brain potentials.

    PubMed

    van Schie, Hein T; Wijers, Albertus A; Mars, Rogier B; Benjamins, Jeroen S; Stowe, Laurie A

    2005-05-01

    Event-related brain potentials were used to study the retrieval of visual semantic information to concrete words, and to investigate possible structural overlap between visual object working memory and concreteness effects in word processing. Subjects performed an object working memory task that involved 5 s retention of simple 4-angled polygons (load 1), complex 10-angled polygons (load 2), and a no-load baseline condition. During the polygon retention interval subjects were presented with a lexical decision task to auditory presented concrete (imageable) and abstract (nonimageable) words, and pseudowords. ERP results are consistent with the use of object working memory for the visualisation of concrete words. Our data indicate a two-step processing model of visual semantics in which visual descriptive information of concrete words is first encoded in semantic memory (indicated by an anterior N400 and posterior occipital positivity), and is subsequently visualised via the network for object working memory (reflected by a left frontal positive slow wave and a bilateral occipital slow wave negativity). Results are discussed in the light of contemporary models of semantic memory.

  19. Evidence for the Activation of Sensorimotor Information during Visual Word Recognition: The Body-Object Interaction Effect

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…

  20. Searching for the right word: Hybrid visual and memory search for words

    PubMed Central

    Boettcher, Sage E. P.; Wolfe, Jeremy M.

    2016-01-01

In "Hybrid Search" (Wolfe, 2012) observers search through visual space for any of multiple targets held in memory. With photorealistic objects as stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with memory set size, even when over 100 items are committed to memory. It is well established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva, 2008). Would hybrid search performance be similar if the targets were words or phrases, where word order can be important and where the processes of memorization might be different? In Experiment One, observers memorized 2, 4, 8, or 16 words in 4 different blocks. After passing a memory test, confirming memorization of the list, observers searched for these words in visual displays containing 2 to 16 words. Replicating Wolfe (2012), RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment One were random. In Experiment Two, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases ranging in length from 2 words to a phrase of no less than 20 words (range 21–86). Words longer than 2 characters from the phrase constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect serial position effects, perhaps reducing RTs for the first (primacy) and/or last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock, 1962). Surprisingly, we found no reliable effects of word order. Thus, in "London Bridge is falling down", "London" and "down" are found no faster than "falling". PMID:25788035
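
    The RT pattern reported here, linear in visual set size V and logarithmic in memory set size M, amounts to a regression of the form RT ≈ a + b·V + c·log2(M). A minimal sketch of fitting that model, on invented data rather than the study's, might run:

    ```python
    # Hypothetical sketch: fitting the hybrid-search RT pattern reported above,
    # RT ~ a + b*V + c*log2(M), where V is visual set size and M is memory set
    # size. The coefficients and data here are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(1)
    V = np.array([2, 4, 8, 16] * 2, dtype=float)   # visual set sizes
    M = np.repeat([2.0, 16.0], 4)                  # memory set sizes
    rt = 400 + 30 * V + 80 * np.log2(M) + rng.normal(0, 5, V.size)

    X = np.column_stack([np.ones_like(V), V, np.log2(M)])
    a, b, c = np.linalg.lstsq(X, rt, rcond=None)[0]
    print(f"a={a:.0f} ms, b={b:.1f} ms/item (visual), "
          f"c={c:.1f} ms per doubling of memory load")
    ```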

  1. Reading laterally: the cerebral hemispheric use of spatial frequencies in visual word recognition.

    PubMed

    Tadros, Karine; Dupuis-Roy, Nicolas; Fiset, Daniel; Arguin, Martin; Gosselin, Frédéric

    2013-01-04

    It is generally accepted that the left hemisphere (LH) is more capable for reading than the right hemisphere (RH). Left hemifield presentations (initially processed by the RH) lead to a globally higher error rate, slower word identification, and a significantly stronger word length effect (i.e., slower reaction times for longer words). Because the visuo-perceptual mechanisms of the brain for word recognition are primarily localized in the LH (Cohen et al., 2003), it is possible that this part of the brain possesses better spatial frequency (SF) tuning for processing the visual properties of words than the RH. The main objective of this study is to determine the SF tuning functions of the LH and RH for word recognition. Each word image was randomly sampled in the SF domain using the SF bubbles method (Willenbockel et al., 2010) and was presented laterally to the left or right visual hemifield. As expected, the LH requires less visual information than the RH to reach the same level of performance, illustrating the well-known LH advantage for word recognition. Globally, the SF tuning of both hemispheres is similar. However, these seemingly identical tuning functions hide important differences. Most importantly, we argue that the RH requires higher SFs to identify longer words because of crowding.

  2. Interference Effects on the Recall of Pictures, Printed Words, and Spoken Words.

    ERIC Educational Resources Information Center

    Burton, John K.; Bruning, Roger H.

    1982-01-01

    Nouns were presented in triads as pictures, printed words, or spoken words and followed by various types of interference. Measures of short- and long-term memory were obtained. In short-term memory, pictorial superiority occurred with acoustic, and visual and acoustic, but not visual interference. Long-term memory showed superior recall for…

  3. Teaching the Meaning of Words to Children with Visual Impairments

    ERIC Educational Resources Information Center

    Vervloed, Mathijs P. J.; Loijens, Nancy E. A.; Waller, Sarah E.

    2014-01-01

    In the report presented here, the authors describe a pilot intervention study that was intended to teach children with visual impairments the meaning of far-away words, and that used their mothers as mediators. The aim was to teach both labels and deep word knowledge, which is the comprehension of the full meaning of words, illustrated through…

  4. Eye Movement Behaviour during Reading of Japanese Sentences: Effects of Word Length and Visual Complexity

    ERIC Educational Resources Information Center

    White, Sarah J.; Hirotani, Masako; Liversedge, Simon P.

    2012-01-01

    Two experiments are presented that examine how the visual characteristics of Japanese words influence eye movement behaviour during reading. In Experiment 1, reading behaviour was compared for words comprising either one or two kanji characters. The one-character words were significantly less likely to be fixated on first-pass, and had…

  5. Developmental Differences for Word Processing in the Ventral Stream

    ERIC Educational Resources Information Center

    Olulade, Olumide A.; Flowers, D. Lynn; Napoliello, Eileen M.; Eden, Guinevere F.

    2013-01-01

    The visual word form system (VWFS), located in the occipito-temporal cortex, is involved in orthographic processing of visually presented words (Cohen et al., 2002). Recent fMRI studies in children and adults have demonstrated a gradient of increasing word-selectivity along the posterior-to-anterior axis of this system (Vinckier et al., 2007), yet…

  6. The Neural Basis of Obligatory Decomposition of Suffixed Words

    ERIC Educational Resources Information Center

    Lewis, Gwyneth; Solomyak, Olla; Marantz, Alec

    2011-01-01

    Recent neurolinguistic studies present somewhat conflicting evidence concerning the role of the inferior temporal cortex (IT) in visual word recognition within the first 200 ms after presentation. On the one hand, fMRI studies of the Visual Word Form Area (VWFA) suggest that the IT might recover representations of the orthographic form of words.…

  7. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  8. Effects of Visual and Auditory Perceptual Aptitudes and Letter Discrimination Pretraining on Word Recognition.

    ERIC Educational Resources Information Center

    Janssen, David Rainsford

    This study investigated alternate methods of letter discrimination pretraining and word recognition training in young children. Seventy kindergarten children were trained to recognize eight printed words in a vocabulary list by a mixed-list paired-associate method. Four of the stimulus words had visual response choices (pictures) and four had…

  9. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  10. Functions of graphemic and phonemic codes in visual word-recognition.

    PubMed

    Meyer, D E; Schvaneveldt, R W; Ruddy, M G

    1974-03-01

Previous investigators have argued that printed words are recognized directly from visual representations and/or phonological representations obtained through phonemic recoding. The present research tested these hypotheses by manipulating graphemic and phonemic relations within various pairs of letter strings. Ss in two experiments classified the pairs as words or nonwords. Reaction times and error rates were relatively small for word pairs (e.g., BRIBE-TRIBE) that were both graphemically and phonemically similar. Graphemic similarity alone inhibited performance on other word pairs (e.g., COUCH-TOUCH). These and other results suggest that phonological representations play a significant role in visual word recognition and that there is a dependence between successive phonemic-encoding operations. An encoding-bias model is proposed to explain the data.

  11. Semantic mapping reveals distinct patterns in descriptions of social relations in adults with autism spectrum disorder.

    PubMed

    Luo, Sean X; Shinall, Jacqueline A; Peterson, Bradley S; Gerber, Andrew J

    2016-08-01

Adults with autism spectrum disorder (ASD) may describe other individuals differently compared with typical adults. In this study, we first asked participants to describe closely related individuals such as parents and close friends with 10 positive and 10 negative characteristics. We then used standard natural language processing methods to digitize and visualize these descriptions. The complex patterns of these descriptive sentences exhibited a difference in semantic space between individuals with ASD and control participants. Machine learning algorithms were able to automatically detect and discriminate between these two groups. Furthermore, we showed that the descriptive sentences from adults with ASD exhibited fewer connections, as defined by word-word co-occurrences in descriptions, and that these word connections formed a less "small-world"-like network.
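
    As a hypothetical sketch of the network measures involved (tokenization, co-occurrence window, and the example sentences below are invented; the study's pipeline may differ), a word-word co-occurrence graph and its small-world diagnostics can be computed like this:

    ```python
    # Hypothetical sketch: build a word-word co-occurrence network from
    # descriptive sentences and compute standard small-world diagnostics
    # (clustering coefficient, characteristic path length). The paper's exact
    # pipeline (tokenization, window size, thresholds) is not specified here.
    import itertools
    import networkx as nx

    descriptions = [
        "my mother is warm and patient",
        "my friend is loyal and funny",
        "my father is quiet and patient",
    ]

    G = nx.Graph()
    for sent in descriptions:
        words = set(sent.split())
        # Connect every pair of words co-occurring within the same description.
        G.add_edges_from(itertools.combinations(words, 2))

    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("clustering:", round(nx.average_clustering(giant), 3))
    print("path length:", round(nx.average_shortest_path_length(giant), 3))
    ```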

  12. Using the Biodatamation(TM) strategy to learn introductory college biology: Value-added effects on selected students' conceptual understanding and conceptual integration of the processes of photosynthesis and cellular respiration

    NASA Astrophysics Data System (ADS)

    Reuter, Jewel Jurovich

The purpose of this exploratory research was to study how students learn photosynthesis and cellular respiration and to determine the value added to students' learning by each of the three technology-scaffolded learning strategy components (animated concept presentations and WebQuest-style activities, data collection, and student-constructed animations) of the BioDatamation(TM) (BDM) Program. BDM learning strategies utilized the Theory of Interacting Visual Fields(TM) (TIVF) (Reuter & Wandersee, 2002a, 2002b, 2003a, 2003b), which holds that meaningful knowledge is hierarchically constructed using the past, present, and future visual fields, with visual metacognitive components derived from the principles of Visual Behavior (Jones, 1995), Human Constructivist Theory (Mintzes & Wandersee, 1998a), and Visual Information Design Theory (Tufte, 1990, 1997, 2001). Student alternative conceptions of photosynthesis and cellular respiration were determined through item analysis of 263,267 Biology Advanced Placement Examinations and were used to develop the BDM instructional strategy and interview questions. The subjects were 24 undergraduate students with high or low prior knowledge of biology enrolled in an introductory-level General Biology course at a major research university in the Deep South. Fifteen participants received BDM instruction, which included original and innovative learning materials and laboratories in 6 phases; 8 of the 15 participants were the subject of in-depth, extended individual analysis. The other 9 participants received traditional, non-BDM instruction. Interviews, which included participants' creation of concept maps and visual field diagrams, were conducted after each phase. Various content analyses, including Chi's Verbal Analysis and quantitizing/qualitizing, were used for data analysis. The total value added to integrative knowledge during BDM instruction with the three visual fields was an average increase of 56% for cellular respiration knowledge and 62% for photosynthesis knowledge, improved long-term memory of concepts, and enhanced biological literacy to the multidimensional level, as determined by the BSCS literacy model. WebQuest-style activities and data collection provided animated prior knowledge in the past visual field and detailed content knowledge construction in the present visual field. During student construction of animated presentations, layering required participants to think by rearranging words and images for improved hierarchical organization of knowledge with real-life applications.

  13. Word Learning from Videos: More Evidence from 2-Year-Olds

    ERIC Educational Resources Information Center

    Allen, Rebekah; Scofield, Jason

    2010-01-01

    Young children are frequently exposed to examples of screen media like videos. The current studies asked whether videos would support word learning and whether word learning from videos might resemble word learning from a live speaker. In Study 1, 2-year-olds saw a video of a target image being labelled with a novel word and were later asked to…

  14. Separating the influences of prereading skills on early word and nonword reading.

    PubMed

    Shapiro, Laura R; Carroll, Julia M; Solity, Jonathan E

    2013-10-01

The essential first step for a beginning reader is to learn to match printed forms to phonological representations. For a new word, this is an effortful process where each grapheme must be translated individually (serial decoding). The role of phonological awareness in developing a decoding strategy is well known. We examined whether beginning readers recruit different skills depending on the nature of the words being read (familiar words vs. nonwords). Print knowledge, phoneme and rhyme awareness, rapid automatized naming (RAN), phonological short-term memory (STM), nonverbal reasoning, vocabulary, auditory skills, and visual attention were measured in 392 prereaders aged 4 and 5 years. Word and nonword reading were measured 9 months later. We used structural equation modeling to examine the skills-reading relationship and modeled correlations between our two reading outcomes and among all prereading skills. We found that a broad range of skills were associated with reading outcomes: early print knowledge, phonological STM, phoneme awareness and RAN. Whereas all of these skills were directly predictive of nonword reading, early print knowledge was the only direct predictor of word reading. Our findings suggest that beginning readers draw most heavily on their existing print knowledge to read familiar words.

  15. Visual words for lip-reading

    NASA Astrophysics Data System (ADS)

    Hassanat, Ahmad B. A.; Jassim, Sabah

    2010-04-01

In this paper, the automatic lip-reading problem is investigated, and an innovative approach to this problem is proposed. The new VSR approach depends on the signature of the word itself, obtained from a hybrid feature extraction method based on geometric, appearance, and image-transform features. The proposed VSR approach is termed "visual words". The visual words approach consists of two main parts: 1) feature extraction/selection, and 2) visual speech feature recognition. After localizing the face and lips, several visual features of the lips were extracted, such as the height and width of the mouth; the mutual information and quality measure between the DWT of the current ROI and the DWT of the previous ROI; the ratio of vertical to horizontal features taken from the DWT of the ROI; the ratio of vertical edges to horizontal edges of the ROI; the appearance of the tongue; and the appearance of teeth. Each spoken word is represented by 8 signals, one for each feature. These signals preserve the dynamics of the spoken word, which carry a good portion of the information. The system is then trained on these features using k-nearest neighbours (KNN) and dynamic time warping (DTW). This approach has been evaluated using a large database of different speakers and large experiment sets. The evaluation has demonstrated the efficiency of the visual words approach and shown that VSR is a speaker-dependent problem.
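
    A minimal sketch of the recognition stage, assuming a single toy feature signal per word rather than the paper's eight, pairs dynamic time warping distances with a 1-nearest-neighbour rule (all templates and the probe below are invented):

    ```python
    # Hypothetical sketch: 1-nearest-neighbour word recognition with dynamic
    # time warping (DTW) over a per-frame lip feature, in the spirit of the
    # "visual words" approach. Signals here are 1-D toy sequences.
    import numpy as np

    def dtw(a, b):
        """Classic DTW distance between two 1-D sequences."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    templates = {                     # mouth-height traces per known word (toy)
        "hello": [0.2, 0.6, 0.8, 0.5, 0.2],
        "yes":   [0.1, 0.3, 0.3, 0.1],
    }
    probe = [0.2, 0.5, 0.9, 0.4, 0.2]
    print(min(templates, key=lambda w: dtw(templates[w], probe)))  # -> 'hello'
    ```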

  16. Do handwritten words magnify lexical effects in visual word recognition?

    PubMed

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  17. Visual Testing: An Experimental Assessment of the Encoding Specificity Hypothesis.

    ERIC Educational Resources Information Center

    DeMelo, Hermes T.; And Others

    This study of 96 high school biology students investigates the effectiveness of visual instruction composed of simple line drawings and printed words as compared to printed-words-only instruction, visual tests, and the interaction between visual or non-visual mode of instruction and mode of testing. The subjects were randomly assigned to be given…

  18. Interactive Book Reading to Accelerate Word Learning by Kindergarten Children With Specific Language Impairment: Identifying Adequate Progress and Successful Learning Patterns.

    PubMed

    Storkel, Holly L; Komesidou, Rouzana; Fleming, Kandace K; Romine, Rebecca Swinburne

    2017-04-20

The goal of this study was to provide guidance to clinicians on early benchmarks of successful word learning in an interactive book reading treatment and to examine how encoding and memory evolution during treatment contribute to word learning outcomes for kindergarten children with specific language impairment (SLI). Twenty-seven kindergarten children with SLI participated in a preliminary clinical trial using interactive book reading to teach 30 new words. Word learning was assessed at 4 points during treatment through a picture naming test. The results indicate that the following performance during treatment was cause for concern and signaled a need to modify the treatment: naming 0-1 treated words correctly at Naming Test 1; naming 0-2 treated words correctly at Naming Test 2; naming 0-3 treated words correctly at Naming Test 3. In addition, the results showed that encoding was the primary limiting factor in word learning, but memory evolution also contributed (albeit to a lesser degree) to word learning success. Case illustrations demonstrate how a clinician's understanding of a child's word learning strengths and weaknesses develops over the course of treatment, substantiating the importance of regular data collection and clinical decision-making to ensure the best possible outcomes for each individual child.

  19. Goodnight book: sleep consolidation improves word learning via storybooks

    PubMed Central

    Williams, Sophie E.; Horst, Jessica S.

    2014-01-01

Reading the same storybooks repeatedly helps preschool children learn words. In addition, sleeping shortly after learning also facilitates memory consolidation and aids learning in older children and adults. The current study explored how sleep promotes word learning in preschool children using a shared storybook reading task. Children were either read the same story repeatedly or different stories, and either napped after the stories or remained awake. Children's word retention was tested 2.5 h later, 24 h later, and 7 days later. Results demonstrate strong, persistent effects of both repeated readings and sleep consolidation on young children's word learning. A key finding is that children who read different stories before napping learned words as well as children who had the advantage of hearing the same story. In contrast, children who read different stories and remained awake never caught up to their peers on later word learning tests. Implications for educational practices are discussed. PMID:24624111

  20. Quality Evaluation Tool for Computer-and Web-Delivered Instruction

    DTIC Science & Technology

    2005-06-01

The objective of this effort was to develop an Instructional Quality Evaluation Tool to help… developed for each rating point on all scales. This report includes these anchored Likert scales, which can serve as a "stand-alone" Tool. The…

  1. The Effects of Seductive Details on Recognition Tests and Transfer Tasks

    DTIC Science & Technology

    2008-06-01


  2. Does Hearing Several Speakers Reduce Foreign Word Learning?

    ERIC Educational Resources Information Center

    Ludington, Jason Darryl

    2016-01-01

    Learning spoken word forms is a vital part of second language learning, and CALL lends itself well to this training. Not enough is known, however, about how auditory variation across speech tokens may affect receptive word learning. To find out, 144 Thai university students with no knowledge of the Patani Malay language learned 24 foreign words in…

  3. Independent Deficits of Visual Word and Motion Processing in Aging and Early Alzheimer's Disease

    PubMed Central

    Velarde, Carla; Perelstein, Elizabeth; Ressmann, Wendy; Duffy, Charles J.

    2013-01-01

We tested whether visual processing impairments in aging and Alzheimer's disease (AD) reflect uniform posterior cortical decline, or independent disorders of visual processing for reading and navigation. Young and older normal controls were compared to early AD patients using psychophysical measures of visual word and motion processing. We find that perceptual thresholds for letter and word discrimination increase from young normal controls, to older normal controls, to early AD patients. Across subject groups, visual motion processing showed a similar pattern of increasing thresholds, with the greatest impact on radial pattern motion perception. Combined analyses show that letter, word, and motion processing impairments are independent of each other. Aging and AD may be accompanied by independent impairments of visual processing for reading and navigation. This suggests separate underlying disorders and highlights the need for comprehensive evaluations to detect early deficits. PMID:22647256

  4. Strengthening the Visual Element in Visual Media Materials.

    ERIC Educational Resources Information Center

    Wilhelm, R. Dwight

    1996-01-01

    Describes how to more effectively communicate the visual element in video and audiovisual materials. Discusses identifying a central topic, developing the visual content without words, preparing a storyboard, testing its effectiveness on people who are unacquainted with the production, and writing the script with as few words as possible. (AEF)

  5. Rehearsal Effects in Adult Word Learning

    ERIC Educational Resources Information Center

    Kaushanskaya, Margarita; Yoo, Jeewon

    2011-01-01

    The goal of this research was to examine the effects of phonological familiarity and rehearsal method (vocal vs. subvocal) on novel word learning. In Experiment 1, English-speaking adults learned phonologically familiar novel words that followed English phonological structure. Participants learned half the words via vocal rehearsal (saying the…

  6. Simulating single word processing in the classic aphasia syndromes based on the Wernicke-Lichtheim-Geschwind theory.

    PubMed

    Weems, Scott A; Reggia, James A

    2006-09-01

    The Wernicke-Lichtheim-Geschwind (WLG) theory of the neurobiological basis of language is of great historical importance, and it continues to exert a substantial influence on most contemporary theories of language in spite of its widely recognized limitations. Here, we suggest that neurobiologically grounded computational models based on the WLG theory can provide a deeper understanding of which of its features are plausible and where the theory fails. As a first step in this direction, we created a model of the interconnected left and right neocortical areas that are most relevant to the WLG theory, and used it to study visual-confrontation naming, auditory repetition, and auditory comprehension performance. No specific functionality is assigned a priori to model cortical regions, other than that implicitly present due to their locations in the cortical network and a higher learning rate in left hemisphere regions. Following learning, the model successfully simulates confrontation naming and word repetition, and acquires a unique internal representation in parietal regions for each named object. Simulated lesions to the language-dominant cortical regions produce patterns of single word processing impairment reminiscent of those postulated historically in the classic aphasia syndromes. These results indicate that WLG theory, instantiated as a simple interconnected network of model neocortical regions familiar to any neuropsychologist/neurologist, captures several fundamental "low-level" aspects of neurobiological word processing and their impairment in aphasia.
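
    To make the model's key assumption concrete, here is a minimal sketch (not the authors' implementation) of the one asymmetry the abstract describes: both hemispheres receive identical Hebbian-style updates, but the left-hemisphere learning rate is higher, so any specialization must emerge from training rather than from pre-assigned function. All sizes and rates below are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        # One weight matrix per hemisphere, initialized identically in distribution.
        W_left = rng.normal(0.0, 0.1, size=(8, 8))
        W_right = rng.normal(0.0, 0.1, size=(8, 8))
        eta_left, eta_right = 0.10, 0.02   # higher learning rate on the left (assumption)

        def hebbian_step(W, eta, pre, post):
            # Simple Hebbian update: strengthen weights between co-active units.
            return W + eta * np.outer(post, pre)

        pre, post = rng.random(8), rng.random(8)
        W_left = hebbian_step(W_left, eta_left, pre, post)
        W_right = hebbian_step(W_right, eta_right, pre, post)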

  7. What you say matters: exploring visual-verbal interactions in visual working memory.

    PubMed

    Mate, Judit; Allen, Richard J; Baqués, Josep

    2012-01-01

    The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape-colour pair (from outside the experimental set, i.e., "pink square"); (b) a pair of unrelated but visually imageable, concrete, words (i.e., "big elephant"); (c) a pair of unrelated and abstract words (i.e., "critical event"); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.

  8. Observational Word Learning: Beyond Propose-But-Verify and Associative Bean Counting.

    PubMed

    Roembke, Tanja; McMurray, Bob

    2016-04-01

    Learning new words is difficult. In any naming situation, there are multiple possible interpretations of a novel word. Recent approaches suggest that learners may solve this problem by tracking co-occurrence statistics between words and referents across multiple naming situations (e.g. Yu & Smith, 2007), overcoming the ambiguity in any one situation. Yet, there remains debate around the underlying mechanisms. We conducted two experiments in which learners acquired eight word-object mappings using cross-situational statistics while eye-movements were tracked. These addressed four unresolved questions regarding the learning mechanism. First, eye-movements during learning showed evidence that listeners maintain multiple hypotheses for a given word and bring them all to bear in the moment of naming. Second, trial-by-trial analyses of accuracy suggested that listeners accumulate continuous statistics about word/object mappings, over and above prior hypotheses they have about a word. Third, consistent, probabilistic context can impede learning, as false associations between words and highly co-occurring referents are formed. Finally, a number of factors not previously considered in prior analysis impact observational word learning: knowledge of the foils, spatial consistency of the target object, and the number of trials between presentations of the same word. This evidence suggests that observational word learning may derive from a combination of gradual statistical or associative learning mechanisms and more rapid real-time processes such as competition, mutual exclusivity and even inference or hypothesis testing.
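
    To make the mechanism debate concrete, here is a minimal sketch of the pure associative ("bean counting") baseline the title alludes to: a learner that does nothing but accumulate word-referent co-occurrence counts across ambiguous naming trials. The class name, the toy trials, and the novel words ("dax", "wug", "blicket") are all illustrative, not taken from the study.

        from collections import defaultdict

        # A pure co-occurrence accumulator: every word heard on a trial is
        # credited with every referent visible on that trial.
        class CrossSituationalLearner:
            def __init__(self):
                self.counts = defaultdict(lambda: defaultdict(int))

            def observe(self, words, referents):
                for w in words:
                    for r in referents:
                        self.counts[w][r] += 1

            def best_referent(self, word):
                # The referent with the highest accumulated count wins.
                pairs = self.counts[word]
                return max(pairs, key=pairs.get) if pairs else None

        learner = CrossSituationalLearner()
        learner.observe({"dax", "blicket"}, {"ball", "cup"})
        learner.observe({"dax", "wug"}, {"ball", "shoe"})
        print(learner.best_referent("dax"))  # -> "ball" (2 co-occurrences vs. 1)

    The abstract's point is that human performance looks like this gradual accumulation plus faster real-time processes (competition, mutual exclusivity, inference) layered on top.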

  9. Words, Shape, Visual Search and Visual Working Memory in 3-Year-Old Children

    ERIC Educational Resources Information Center

    Vales, Catarina; Smith, Linda B.

    2015-01-01

    Do words cue children's visual attention, and if so, what are the relevant mechanisms? Across four experiments, 3-year-old children (N = 163) were tested in visual search tasks in which targets were cued with only a visual preview versus a visual preview and a spoken name. The experiments were designed to determine whether labels facilitated…

  10. Word learning in adults with second-language experience: effects of phonological and referent familiarity.

    PubMed

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2013-04-01

    The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar versus unfamiliar referents and whether successful word learning is associated with increased second-language experience. Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically familiar novel words (constructed using English sounds) or phonologically unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition task. A median-split procedure identified high-ability and low-ability word learners in each condition, and the two groups were compared on measures of second-language experience. Findings suggest that the ability to accurately match newly learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: in which phonologically unfamiliar novel words were paired with familiar referents. Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults.
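
    A hedged sketch of the median-split step described above, with invented accuracy scores: within each condition, learners above the median accuracy form the high-ability group and the rest the low-ability group.

        import numpy as np

        # Toy word-learning accuracies for participants in one condition.
        scores = np.array([0.42, 0.55, 0.61, 0.48, 0.70, 0.33, 0.58, 0.66])
        median = np.median(scores)
        high_ability = scores > median   # True -> high-ability group
        print(median, high_ability)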

  11. Word learning in adults with second language experience: Effects of phonological and referent familiarity

    PubMed Central

    Kaushanskaya, Margarita; Yoo, Jeewon; Van Hecke, Stephanie

    2014-01-01

    Purpose The goal of this research was to examine whether phonological familiarity exerts different effects on novel word learning for familiar vs. unfamiliar referents, and whether successful word-learning is associated with increased second-language experience. Method Eighty-one adult native English speakers with various levels of Spanish knowledge learned phonologically-familiar novel words (constructed using English sounds) or phonologically-unfamiliar novel words (constructed using non-English and non-Spanish sounds) in association with either familiar or unfamiliar referents. Retention was tested via a forced-choice recognition-task. A median-split procedure identified high-ability and low-ability word-learners in each condition, and the two groups were compared on measures of second-language experience. Results Findings suggest that the ability to accurately match newly-learned novel names to their appropriate referents is facilitated by phonological familiarity only for familiar referents but not for unfamiliar referents. Moreover, more extensive second-language learning experience characterized superior learners primarily in one word-learning condition: Where phonologically-unfamiliar novel words were paired with familiar referents. Conclusions Together, these findings indicate that phonological familiarity facilitates novel word learning only for familiar referents, and that experience with learning a second language may have a specific impact on novel vocabulary learning in adults. PMID:22992709

  12. Reading impairment in schizophrenia: dysconnectivity within the visual system.

    PubMed

    Vinckier, Fabien; Cohen, Laurent; Oppenheim, Catherine; Salvador, Alexandre; Picard, Hernan; Amado, Isabelle; Krebs, Marie-Odile; Gaillard, Raphaël

    2014-01-01

    Patients with schizophrenia suffer from perceptual visual deficits. It remains unclear whether those deficits result from an isolated impairment of a localized brain process or from a more diffuse long-range dysconnectivity within the visual system. We aimed to explore, with a reading paradigm, the functioning of both ventral and dorsal visual pathways and their interaction in schizophrenia. Patients with schizophrenia and control subjects were studied using event-related functional MRI (fMRI) while reading words that were progressively degraded through word rotation or letter spacing. Reading intact or minimally degraded single words involves mainly the ventral visual pathway. Conversely, reading in non-optimal conditions involves both the ventral and the dorsal pathway. The reading paradigm thus allowed us to study the functioning of both pathways and their interaction. Behaviourally, patients with schizophrenia were selectively impaired at reading highly degraded words. While fMRI activation level was not different between patients and controls, functional connectivity between the ventral and dorsal visual pathways increased with word degradation in control subjects, but not in patients. Moreover, there was a negative correlation between the patients' behavioural sensitivity to stimulus degradation and dorso-ventral connectivity. This study suggests that perceptual visual deficits in schizophrenia could be related to dysconnectivity between dorsal and ventral visual pathways. © 2013 Published by Elsevier Ltd.

  13. Word Writing vs. Meaning Inferencing in Contextualized L2 Vocabulary Learning: Assessing the Effect of Different Vocabulary Learning Strategies

    ERIC Educational Resources Information Center

    Candry, Sarah; Elgort, Irina; Deconinck, Julie; Eyckmans, June

    2017-01-01

    The majority of L2 vocabulary studies concentrate on learning word meaning and provide learners with opportunities for semantic elaboration (i.e., focus on word meaning). However, in initial vocabulary learning, engaging in structural elaboration (i.e., focus on word form) with a view to acquiring L2 word form is equally important. The present…

  14. Left-lateralized N170 Effects of Visual Expertise in Reading: Evidence from Japanese Syllabic and Logographic Scripts

    PubMed Central

    Maurer, Urs; Zevin, Jason D.; McCandliss, Bruce D.

    2015-01-01

    The N170 component of the event-related potential (ERP) reflects experience-dependent neural changes in several forms of visual expertise, including expertise for visual words. Readers skilled in writing systems that link characters to phonemes (i.e., alphabetic writing) typically produce a left-lateralized N170 to visual word forms. This study examined the N170 in three Japanese scripts that link characters to larger phonological units. Participants were monolingual English speakers (EL1) and native Japanese speakers (JL1) who were also proficient in English. ERPs were collected using a 129-channel array, as participants performed a series of experiments viewing words or novel control stimuli in a repetition detection task. The N170 was strongly left-lateralized for all three Japanese scripts (including logographic Kanji characters) in JL1 participants, but bilateral in EL1 participants viewing these same stimuli. This demonstrates that left-lateralization of the N170 is dependent on specific reading expertise and is not limited to alphabetic scripts. Additional contrasts within the moraic Katakana script revealed equivalent N170 responses in JL1 speakers for familiar Katakana words and for Kanji words transcribed into novel Katakana words, suggesting that the N170 expertise effect is driven by script familiarity rather than familiarity with particular visual word forms. Finally, for English words and novel symbol string stimuli, both EL1 and JL1 subjects produced equivalent responses for the novel symbols, and more left-lateralized N170 responses for the English words, indicating that such effects are not limited to the first language. Taken together, these cross-linguistic results suggest that similar neural processes underlie visual expertise for print in very different writing systems. PMID:18370600

  15. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  16. Visual hallucinations in schizophrenia: confusion between imagination and perception.

    PubMed

    Brébion, Gildas; Ohlsen, Ruth I; Pilowsky, Lyn S; David, Anthony S

    2008-05-01

    An association between hallucinations and reality-monitoring deficit has been repeatedly observed in patients with schizophrenia. Most data concern auditory/verbal hallucinations. The aim of this study was to investigate the association between visual hallucinations and a specific type of reality-monitoring deficit, namely confusion between imagined and perceived pictures. Forty-one patients with schizophrenia and 43 healthy control participants completed a reality-monitoring task. Thirty-two items were presented either as written words or as pictures. After the presentation phase, participants had to recognize the target words and pictures among distractors, and then remember their mode of presentation. All groups of participants recognized the pictures better than the words, except the patients with visual hallucinations, who presented the opposite pattern. The participants with visual hallucinations made more misattributions to pictures than did the others, and higher ratings of visual hallucinations were correlated with increased tendency to remember words as pictures. No association with auditory hallucinations was revealed. Our data suggest that visual hallucinations are associated with confusion between visual mental images and perception.

  17. The (lack of) effect of dynamic visual noise on the concreteness effect in short-term memory.

    PubMed

    Castellà, Judit; Campoy, Guillermo

    2018-05-17

    It has been suggested that the concreteness effect in short-term memory (STM) is a consequence of concrete words having more distinctive and richer semantic representations. The generation and storage of visual codes in STM could also play a crucial role in the effect, because concrete words are more imageable than abstract words. If this were the case, introducing a visual interference task would be expected to disrupt recall of concrete words. A Dynamic Visual Noise (DVN) display, which has been shown to eliminate the concreteness effect in long-term memory (LTM), was presented during encoding of concrete and abstract words in an STM serial recall task. Results showed a main effect of word type, with more item errors for abstract words, and a main effect of DVN, which impaired global performance through more order errors, but no interaction, suggesting that DVN had no impact on the concreteness effect. These findings are discussed in terms of LTM participation through redintegration processes and in terms of language-based models of verbal STM.

  18. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
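
    A minimal sketch of a phi-square confusability measure of the kind the abstract favours (phi^2 = chi^2 / N, computed here between two stimuli's rows of a perceptual confusion matrix); the response counts are invented, and the exact normalization used in the study may differ.

        import numpy as np
        from scipy.stats import chi2_contingency

        def phi_square(row_a, row_b):
            # phi^2 = chi^2 / N for a 2 x k table of response counts; values
            # near 0 mean the two stimuli draw near-identical response
            # distributions, i.e. they are highly confusable with each other.
            table = np.vstack([row_a, row_b])
            chi2 = chi2_contingency(table)[0]
            return chi2 / table.sum()

        # Response counts for two stimulus words over the same response categories.
        print(phi_square(np.array([50, 30, 20]), np.array([45, 35, 20])))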

  19. Perception of Words and Non-Words in the Upper and Lower Visual Fields

    ERIC Educational Resources Information Center

    Darker, Iain T.; Jordan, Timothy R.

    2004-01-01

    The findings of previous investigations into word perception in the upper and the lower visual field (VF) are variable and may have incurred non-perceptual biases caused by the asymmetric distribution of information within a word, an advantage for saccadic eye-movements to targets in the upper VF and the possibility that stimuli were not projected…

  20. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  1. The effect of compression and attention allocation on speech intelligibility. II

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2004-05-01

    Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.

  2. [Medical Image Registration Method Based on a Semantic Model with Directional Visual Words].

    PubMed

    Jin, Yufei; Ma, Meng; Yang, Xin

    2016-04-01

    Medical image registration is very challenging because of the varied imaging modalities, variable image quality, wide inter-patient variability, and intra-patient variability as disease progresses, together with strict requirements for robustness. Inspired by semantic models, and especially by the recent progress on computer vision tasks under the bag-of-visual-words framework, we set up a novel semantic model to match medical images. Because most medical images have poor contrast, a small dynamic range, and intensity-only content, traditional visual-word models do not perform well on them. Building on this prior work, we propose a novel visual-word model, named directional visual words, which performs better on medical images, and we apply it to medical image registration. In our experiments, critical anatomical structures were first specified manually by experts. We then used the directional visual words, a coarse-to-fine spatial-pyramid search strategy, and the k-means algorithm to locate the positions of these key structures accurately, and registered the corresponding images by the areas around these positions. Experiments performed on real cardiac images showed that our method can achieve high registration accuracy in specific areas.
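
    A hedged bag-of-visual-words sketch of the framework the abstract builds on: local descriptors are quantized with k-means into a visual vocabulary, and an image is then represented as a histogram over that vocabulary. The random descriptors and the vocabulary size are stand-ins, and the paper's directional weighting is not reproduced here.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)
        train_descriptors = rng.normal(size=(500, 32))  # pooled local descriptors

        k = 16  # visual vocabulary size (illustrative)
        codebook = KMeans(n_clusters=k, n_init=10, random_state=0).fit(train_descriptors)

        def bovw_histogram(image_descriptors):
            # Assign each descriptor to its nearest visual word, then histogram.
            words = codebook.predict(image_descriptors)
            hist = np.bincount(words, minlength=k).astype(float)
            return hist / hist.sum()

        print(bovw_histogram(rng.normal(size=(40, 32))))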

  3. Unraveling Vocabulary Learning: Reader and Item-Level Predictors of Vocabulary Learning within Comprehension Instruction for Fifth and Sixth Graders

    ERIC Educational Resources Information Center

    Goodwin, Amanda P.; Cho, Sun-Joo

    2016-01-01

    This study explores reader, word, and learning activity characteristics related to vocabulary learning for 202 fifth and sixth graders (N = 118 and 84, respectively) learning 16 words. Three measures of word knowledge were used: multiple-choice definition knowledge, self-report of meaning knowledge, and production of morphologically related words.…

  4. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  5. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    PubMed

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies with adolescents, word knowledge outcomes tend to come in complex data structures: multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal designs, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data with such complexity. Results from this application provide a deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.
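
    In generic notation, the kind of model the abstract describes is a binary (logistic) generalized linear mixed model with crossed random effects for readers and words; the sketch below is illustrative rather than the authors' exact specification:

        \operatorname{logit} \Pr(y_{pit} = 1)
            = \mathbf{x}_{pit}^{\top} \boldsymbol{\beta} + u_p + v_i,
        \qquad u_p \sim \mathcal{N}(0, \sigma_u^2),
        \quad  v_i \sim \mathcal{N}(0, \sigma_v^2)

    Here y_{pit} is reader p's binary response to word i at occasion t, the fixed effects beta encode word characteristics and instructional context, and the crossed random effects u_p and v_i absorb reader-level and word-level variation; the longitudinal and multi-group structure enters through t- and group-indexed predictors in x_{pit}.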

  6. When a hit sounds like a kiss: An electrophysiological exploration of semantic processing in visual narrative.

    PubMed

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-06-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. When a hit sounds like a kiss: an electrophysiological exploration of semantic processing in visual narrative

    PubMed Central

    Manfredi, Mirella; Cohn, Neil; Kutas, Marta

    2017-01-01

    Researchers have long questioned whether information presented through different sensory modalities involves distinct or shared semantic systems. We investigated uni-sensory cross-modal processing by recording event-related brain potentials to words replacing the climactic event in a visual narrative sequence (comics). We compared Onomatopoeic words, which phonetically imitate action sounds (Pow!), with Descriptive words, which describe an action (Punch!), that were (in)congruent within their sequence contexts. Across two experiments, larger N400s appeared to Anomalous Onomatopoeic or Descriptive critical panels than to their congruent counterparts, reflecting a difficulty in semantic access/retrieval. Also, Descriptive words evinced a greater late frontal positivity compared to Onomatopoeic words, suggesting that, though plausible, they may be less predictable/expected in visual narratives. Our results indicate that uni-sensory cross-modal integration of word/letter-symbol strings within visual narratives elicits ERP patterns typically observed for written sentence processing, thereby suggesting the engagement of similar domain-independent integration/interpretation mechanisms. PMID:28242517

  8. Music reading expertise modulates hemispheric lateralization in English word processing but not in Chinese character processing.

    PubMed

    Li, Sara Tze Kwan; Hsiao, Janet Hui-Wen

    2018-07-01

    Music notation and English word reading both involve mapping horizontally arranged visual components to components in sound, in contrast to reading in logographic languages such as Chinese. Accordingly, music-reading expertise may influence English word processing more than Chinese character processing. Here we showed that musicians named English words significantly faster than non-musicians when words were presented in the left visual field/right hemisphere (RH) or the center position, suggesting an advantage of RH processing due to music reading experience. This effect was not observed in Chinese character naming. A follow-up ERP study showed that in a sequential matching task, musicians had reduced RH N170 responses to English non-words under the processing of musical segments as compared with non-musicians, suggesting a shared visual processing mechanism in the RH between music notation and English non-word reading. This shared mechanism may be related to the letter-by-letter, serial visual processing that characterizes RH English word recognition (e.g., Lavidor & Ellis, 2001), which may consequently facilitate English word processing in the RH in musicians. Thus, music reading experience may have differential influences on the processing of different languages, depending on their similarities in the cognitive processes involved. Copyright © 2018 Elsevier B.V. All rights reserved.

  9. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  10. Interactive Book Reading to Accelerate Word Learning by Kindergarten Children With Specific Language Impairment: Identifying Adequate Progress and Successful Learning Patterns

    PubMed Central

    Komesidou, Rouzana; Fleming, Kandace K.; Romine, Rebecca Swinburne

    2017-01-01

    Purpose The goal of this study was to provide guidance to clinicians on early benchmarks of successful word learning in an interactive book reading treatment and to examine how encoding and memory evolution during treatment contribute to word learning outcomes by kindergarten children with specific language impairment (SLI). Method Twenty-seven kindergarten children with SLI participated in a preliminary clinical trial using interactive book reading to teach 30 new words. Word learning was assessed at 4 points during treatment through a picture naming test. Results The results indicate that the following performance during treatment was cause for concern and signaled a need to modify the treatment: naming 0–1 treated words correctly at Naming Test 1; naming 0–2 treated words correctly at Naming Test 2; naming 0–3 treated words correctly at Naming Test 3. In addition, the results showed that encoding was the primary limiting factor in word learning, but memory evolution also contributed (albeit to a lesser degree) to word learning success. Conclusion Case illustrations demonstrate how a clinician's understanding of a child's word learning strengths and weaknesses develops over the course of treatment, substantiating the importance of regular data collection and clinical decision-making to ensure the best possible outcomes for each individual child. PMID:28419188

  11. Acquiring concepts and features of novel words by two types of learning: direct mapping and inference.

    PubMed

    Chen, Shuang; Wang, Lin; Yang, Yufang

    2014-04-01

    This study examined the semantic representation of novel words learnt in two conditions: directly mapping a novel word to a concept (Direct mapping: DM) and inferring the concept from provided features (Inferred learning: IF). A condition where no definite concept could be inferred (No basic-level meaning: NM) served as a baseline. The semantic representation of the novel word was assessed via a semantic-relatedness judgment task. In this task, the learned novel word served as a prime, while the corresponding concept, an unlearned feature of the concept, and an unrelated word served as targets. ERP responses to the targets, primed by the novel words in the three learning conditions, were compared. For the corresponding concept, smaller N400s were elicited in the DM and IF conditions than in the NM condition, indicating that the concept could be obtained in both learning conditions. However, for the unlearned feature, the targets in the IF condition produced an N400 effect while in the DM condition elicited an LPC effect relative to the NM learning condition. No ERP difference was observed among the three learning conditions for the unrelated words. The results indicate that conditions of learning affect the semantic representation of novel word, and that the unlearned feature was only activated by the novel word in the IF learning condition. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Effect of word familiarity on visually evoked magnetic fields.

    PubMed

    Harada, N; Iwaki, S; Nakagawa, S; Yamaguchi, M; Tonoike, M

    2004-11-30

    This study investigated how the familiarity of visually presented words affects word recognition in the human brain. Word familiarity is an index of the relative ease of word perception, reflected in the speed and accuracy of word recognition. We examined the effect of word familiarity on visually evoked magnetic fields elicited during a word-naming task, using "Hiragana" (phonetic characters in Japanese orthography) words as visual stimuli. The words were selected from a database of lexical properties of Japanese; the four-character "Hiragana" words were grouped and presented in four classes of familiarity. Three components were observed in the averaged waveforms of the root mean square (RMS) value, at latencies of about 100 ms, 150 ms, and 220 ms. The RMS value of the 220 ms component showed a significant positive correlation with familiarity (F(3,36) = 5.501, p = 0.035). Equivalent current dipoles (ECDs) of the 220 ms component were localized in the intraparietal sulcus (IPS). Increments in the RMS value of the 220 ms component, which might reflect ideographic word recognition (retrieving the word "as a whole"), were enhanced with increasing familiarity. The interaction among characters, which increased with familiarity, might make the word function "as a large symbol" and enhance a "pop-out" effect, segmenting the word (as a figure) from the ground.
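
    For reference, the RMS waveform is simply the root mean square over channels at each time sample; a minimal sketch with an invented channels-by-samples array:

        import numpy as np

        rng = np.random.default_rng(0)
        evoked = rng.standard_normal((204, 600))   # channels x time samples (illustrative)
        rms = np.sqrt((evoked ** 2).mean(axis=0))  # one RMS value per time sample
        print(rms.shape)                           # (600,)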

  13. Cross-situational statistical word learning in young children.

    PubMed

    Suanda, Sumarga H; Mugwanya, Nassali; Namy, Laura L

    2014-10-01

    Recent empirical work has highlighted the potential role of cross-situational statistical word learning in children's early vocabulary development. In the current study, we tested 5- to 7-year-old children's cross-situational learning by presenting children with a series of ambiguous naming events containing multiple words and multiple referents. Children rapidly learned word-to-object mappings by attending to the co-occurrence regularities across these ambiguous naming events. The current study begins to address the mechanisms underlying children's learning by demonstrating that the diversity of learning contexts affects performance. The implications of the current findings for the role of cross-situational word learning at different points in development are discussed along with the methodological implications of employing school-aged children to test hypotheses regarding the mechanisms supporting early word learning. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. The role of reference in cross-situational word learning.

    PubMed

    Wang, Felix Hao; Mintz, Toben H

    2018-01-01

    Word learning involves massive ambiguity, since in a particular encounter with a novel word, there are an unlimited number of potential referents. One proposal for how learners surmount the problem of ambiguity is that learners use cross-situational statistics to constrain the ambiguity: When a word and its referent co-occur across multiple situations, learners will associate the word with the correct referent. Yu and Smith (2007) propose that these co-occurrence statistics are sufficient for word-to-referent mapping. Alternative accounts hold that co-occurrence statistics alone are insufficient to support learning, and that learners are further guided by knowledge that words are referential (e.g., Waxman & Gelman, 2009). However, no behavioral word learning studies we are aware of explicitly manipulate subjects' prior assumptions about the role of the words in the experiments in order to test the influence of these assumptions. In this study, we directly test whether, when faced with referential ambiguity, co-occurrence statistics are sufficient for word-to-referent mappings in adult word-learners. Across a series of cross-situational learning experiments, we varied the degree to which there was support for the notion that the words were referential. At the same time, the statistical information about the words' meanings was held constant. When we overrode support for the notion that words were referential, subjects failed to learn the word-to-referent mappings, but otherwise they succeeded. Thus, cross-situational statistics were useful only when learners had the goal of discovering mappings between words and referents. We discuss the implications of these results for theories of word learning in children's language acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Effects of Word and Fragment Writing during L2 Vocabulary Learning

    ERIC Educational Resources Information Center

    Barcroft, Joe

    2007-01-01

    This study examined how writing (copying) target words and word fragments affects intentional second language (L2) vocabulary learning. English-speaking first-semester learners of Spanish attempted to learn 24 Spanish nouns via word-picture repetition in three conditions: (1) word writing, (2) fragment writing, and (3) no writing. After the…

  16. Strength of Temporal White Matter Pathways Predicts Semantic Learning.

    PubMed

    Ripollés, Pablo; Biel, Davina; Peñaloza, Claudia; Kaufmann, Jörn; Marco-Pallarés, Josep; Noesselt, Toemme; Rodríguez-Fornells, Antoni

    2017-11-15

    Learning the associations between words and meanings is a fundamental human ability. Although the language network is cortically well defined, the role of the white matter pathways supporting novel word-to-meaning mappings remains unclear. Here, by using contextual and cross-situational word learning, we tested whether learning the meaning of a new word is related to the integrity of the language-related white matter pathways in 40 adults (18 women). The arcuate, uncinate, inferior-fronto-occipital and inferior-longitudinal fasciculi were virtually dissected using manual and automatic deterministic fiber tracking. Critically, the automatic method allowed assessing the white matter microstructure along the tract. Results demonstrate that the microstructural properties of the left inferior-longitudinal fasciculus predict contextual learning, whereas the left uncinate was associated with cross-situational learning. In addition, we identified regions of special importance within these pathways: the posterior middle temporal gyrus, thought to serve as a lexical interface and specifically related to contextual learning; the anterior temporal lobe, known to be an amodal hub for semantic processing and related to cross-situational learning; and the white matter near the hippocampus, a structure fundamental for the initial stages of new-word learning and, remarkably, related to both types of word learning. No significant associations were found for the inferior-fronto-occipital fasciculus or the arcuate. While previous results suggest that learning new phonological word forms is mediated by the arcuate fasciculus, these findings show that the temporal pathways are the crucial neural substrate supporting one of the most striking human abilities: our capacity to identify correct associations between words and meanings under referential indeterminacy. SIGNIFICANCE STATEMENT The language-processing network is cortically (i.e., gray matter) well defined. However, the role of the white matter pathways that support novel word learning within this network remains unclear. In this work, we dissected language-related (arcuate, uncinate, inferior-fronto-occipital, and inferior-longitudinal) fasciculi using manual and automatic tracking. We found the left inferior-longitudinal fasciculus to be predictive of word-learning success in two word-to-meaning tasks: contextual and cross-situational learning paradigms. The left uncinate was predictive of cross-situational word learning. No significant correlations were found for the arcuate or the inferior-fronto-occipital fasciculus. While previous results showed that learning new phonological word forms is supported by the arcuate fasciculus, these findings demonstrate that learning new word-to-meaning associations is mainly dependent on temporal white matter pathways. Copyright © 2017 the authors.

  17. Literacy learning in users of AAC: A neurocognitive perspective.

    PubMed

    Van Balkom, Hans; Verhoeven, Ludo

    2010-09-01

    The understanding of written or printed text or discourse - depicted either in orthographical, graphic-visual or tactile symbols - calls upon both bottom-up word recognition processes and top-down comprehension processes. Different architectures have been proposed to account for literacy processes. Research has shown that the first steps in perceiving, processing and deriving conceptual meaning from words, graphic symbols, manual signs, and co-speech gestures or tactile manual signing and tangible symbols can be seen as identical and collectively (sub)activated. Results from recent brain research and neurolinguistics have revealed new insights in the reading process of typical and atypical readers and may provide verifiable evidence for improved literacy assessment and the validation of early intervention programs for AAC users.

  18. Statistical word learning in children with autism spectrum disorder and specific language impairment.

    PubMed

    Haebig, Eileen; Saffran, Jenny R; Ellis Weismer, Susan

    2017-11-01

    Word learning is an important component of language development that influences child outcomes across multiple domains. Despite the importance of word knowledge, word-learning mechanisms are poorly understood in children with specific language impairment (SLI) and children with autism spectrum disorder (ASD). This study examined underlying mechanisms of word learning, specifically, statistical learning and fast-mapping, in school-aged children with typical and atypical development. Statistical learning was assessed through a word segmentation task and fast-mapping was examined in an object-label association task. We also examined children's ability to map meaning onto newly segmented words in a third task that combined exposure to an artificial language and a fast-mapping task. Children with SLI had poorer performance on the word segmentation and fast-mapping tasks relative to the typically developing and ASD groups, who did not differ from one another. However, when children with SLI were exposed to an artificial language with phonemes used in the subsequent fast-mapping task, they successfully learned more words than in the isolated fast-mapping task. There was some evidence that word segmentation abilities are associated with word learning in school-aged children with typical development and ASD, but not SLI. Follow-up analyses also examined performance in children with ASD who did and did not have a language impairment. Children with ASD with language impairment evidenced intact statistical learning abilities, but subtle weaknesses in fast-mapping abilities. As the Procedural Deficit Hypothesis (PDH) predicts, children with SLI have impairments in statistical learning. However, children with SLI also have impairments in fast-mapping. Nonetheless, they are able to take advantage of additional phonological exposure to boost subsequent word-learning performance. In contrast to the PDH, children with ASD appear to have intact statistical learning, regardless of language status; however, fast-mapping abilities differ according to broader language skills. © 2017 Association for Child and Adolescent Mental Health.

  19. The effects of acute hypoglycaemia on memory acquisition and recall and prospective memory in type 1 diabetes.

    PubMed

    Warren, R E; Zammitt, N N; Deary, I J; Frier, B M

    2007-01-01

    Global memory performance is impaired during acute hypoglycaemia. This study assessed whether moderate hypoglycaemia disrupts learning and recall in isolation, and utilised a novel test of prospective memory which may better reflect the role of memory in daily life than conventional tests. Thirty-six subjects with type 1 diabetes participated, 20 with normal hypoglycaemia awareness (NHA) and 16 with impaired hypoglycaemia awareness (IHA). Each underwent a hypoglycaemic clamp with target blood glucose 2.5 mmol/l. Prior to hypoglycaemia, subjects attempted to memorise instructions for a prospective memory task, and recall was assessed during hypoglycaemia. Subjects then completed the learning and immediate recall stages of three conventional memory tasks (word recall, story recall, visual recall) during hypoglycaemia. Euglycaemia was restored and delayed memory for the conventional tasks was tested. The same procedures were completed in euglycaemic control studies (blood glucose 4.5 mmol/l). Hypoglycaemia impaired performance significantly on the prospective memory task (p = 0.004). Hypoglycaemia also significantly impaired both immediate and delayed recall for the word and story recall tasks (p < 0.01 in each case). There was no significant deterioration of performance on the visual memory task. The effect of hypoglycaemia did not differ significantly between subjects with NHA and IHA. Impaired performance on the prospective memory task during hypoglycaemia demonstrates that recall is disrupted by hypoglycaemia. Impaired performance on the conventional memory tasks demonstrates that learning is also disrupted by hypoglycaemia. Results of the prospective memory task support the relevance of these findings to the everyday lives of people with diabetes.

  20. Associative vocabulary learning: development and testing of two paradigms for the (re-) acquisition of action- and object-related words.

    PubMed

    Freundlieb, Nils; Ridder, Volker; Dobel, Christian; Enriquez-Geppert, Stefanie; Baumgaertner, Annette; Zwitserlood, Pienie; Gerloff, Christian; Hummel, Friedhelm C; Liuzzi, Gianpiero

    2012-01-01

    Despite a growing number of studies, the neurophysiology of adult vocabulary acquisition is still poorly understood. One reason is that paradigms that can easily be combined with neuroscientific methods are rare. Here, we tested the efficiency of two paradigms for vocabulary (re-) acquisition, and compared the learning of novel words for actions and objects. Cortical networks involved in adult native-language word processing are widespread, with differences postulated between words for objects and actions. Words and what they stand for are supposed to be grounded in perceptual and sensorimotor brain circuits depending on their meaning. If there are specific brain representations for different word categories, we hypothesized behavioural differences in the learning of action-related and object-related words. Paradigm A, with the learning of novel words for body-related actions spread out over a number of days, revealed fast learning of these new action words, and stable retention up to 4 weeks after training. The single-session Paradigm B employed objects and actions. Performance during acquisition did not differ between action-related and object-related words (time*word category: p = 0.01), but the translation rate was clearly better for object-related (79%) than for action-related words (53%, p = 0.002). Both paradigms yielded robust associative learning of novel action-related words, as previously demonstrated for object-related words. Translation success differed for action- and object-related words, which may indicate different neural mechanisms. The paradigms tested here are well suited to investigate such differences with neuroscientific means. Given the stable retention and minimal requirements for conscious effort, these learning paradigms are promising for vocabulary re-learning in brain-lesioned people. In combination with neuroimaging, neuro-stimulation or pharmacological intervention, they may well advance the understanding of language learning to optimize therapeutic strategies.

  1. Rapid modulation of spoken word recognition by visual primes.

    PubMed

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J

    2016-02-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics.

  2. Rapid modulation of spoken word recognition by visual primes

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2015-01-01

    In a masked cross-modal priming experiment with ERP recordings, spoken Japanese words were primed with words written in one of the two syllabary scripts of Japanese. An early priming effect, peaking at around 200ms after onset of the spoken word target, was seen in left lateral electrode sites for Katakana primes, and later effects were seen for both Hiragana and Katakana primes on the N400 ERP component. The early effect is thought to reflect the efficiency with which words in Katakana script make contact with sublexical phonological representations involved in spoken language comprehension, due to the particular way this script is used by Japanese readers. This demonstrates fast-acting influences of visual primes on the processing of auditory target words, and suggests that briefly presented visual primes can influence sublexical processing of auditory target words. The later N400 priming effects, on the other hand, most likely reflect cross-modal influences on activity at the level of whole-word phonology and semantics. PMID:26516296

  3. Word Learning Emerges from the Interaction of Online Referent Selection and Slow Associative Learning

    ERIC Educational Resources Information Center

    McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.

    2012-01-01

    Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We…

  4. Observing Iconic Gestures Enhances Word Learning in Typically Developing Children and Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Vogt, Susanne; Kauschke, Christina

    2017-01-01

    Research has shown that observing iconic gestures helps typically developing children (TD) and children with specific language impairment (SLI) learn new words. So far, studies mostly compared word learning with and without gestures. The present study investigated word learning under two gesture conditions in children with and without language…

  5. Brief report: Do children with autism gather information from social contexts to aid their word learning?

    PubMed

    Jing, Wei; Fang, Junming

    2014-06-01

    Typically developing (TD) infants can capitalize on social eye gaze and social contexts to aid word learning. Although children with autism disorder (AD) are known to exhibit atypicality in word learning via social eye gaze, their ability to utilize social contexts for word learning is not well understood. We investigated whether verbal AD children exhibit word-learning ability via social contextual cues by late childhood. We found that AD children, unlike TD controls, failed to infer the speaker's referential intention through information gathered from the social context. This suggests that TD children can learn words in diverse social-pragmatic contexts as early as toddlerhood, whereas AD children are still unable to do so by late childhood.

  6. Searching for the right word: Hybrid visual and memory search for words.

    PubMed

    Boettcher, Sage E P; Wolfe, Jeremy M

    2015-05-01

    In "hybrid search" (Wolfe Psychological Science, 23(7), 698-703, 2012), observers search through visual space for any of multiple targets held in memory. With photorealistic objects as the stimuli, response times (RTs) increase linearly with the visual set size and logarithmically with the memory set size, even when over 100 items are committed to memory. It is well-established that pictures of objects are particularly easy to memorize (Brady, Konkle, Alvarez, & Oliva Proceedings of the National Academy of Sciences, 105, 14325-14329, 2008). Would hybrid-search performance be similar if the targets were words or phrases, in which word order can be important, so that the processes of memorization might be different? In Experiment 1, observers memorized 2, 4, 8, or 16 words in four different blocks. After passing a memory test, confirming their memorization of the list, the observers searched for these words in visual displays containing two to 16 words. Replicating Wolfe (Psychological Science, 23(7), 698-703, 2012), the RTs increased linearly with the visual set size and logarithmically with the length of the word list. The word lists of Experiment 1 were random. In Experiment 2, words were drawn from phrases that observers reported knowing by heart (e.g., "London Bridge is falling down"). Observers were asked to provide four phrases, ranging in length from two words to no less than 20 words (range 21-86). All words longer than two characters from the phrase, constituted the target list. Distractor words were matched for length and frequency. Even with these strongly ordered lists, the results again replicated the curvilinear function of memory set size seen in hybrid search. One might expect to find serial position effects, perhaps reducing the RTs for the first (primacy) and/or the last (recency) members of a list (Atkinson & Shiffrin, 1968; Murdock Journal of Experimental Psychology, 64, 482-488, 1962). Surprisingly, we showed no reliable effects of word order. Thus, in "London Bridge is falling down," "London" and "down" were found no faster than "falling."

  7. Evidence for highly selective neuronal tuning to whole words in the "visual word form area".

    PubMed

    Glezer, Laurie S; Jiang, Xiong; Riesenhuber, Maximilian

    2009-04-30

    Theories of reading have posited the existence of a neural representation coding for whole real words (i.e., an orthographic lexicon), but experimental support for such a representation has proved elusive. Using fMRI rapid adaptation techniques, we provide evidence that the human left ventral occipitotemporal cortex (specifically the "visual word form area," VWFA) contains a representation based on neurons highly selective for individual real words, in contrast to current theories that posit a sublexical representation in the VWFA.

  8. Comparison of credible patients of very low intelligence and non-credible patients on neurocognitive performance validity indicators.

    PubMed

    Smith, Klayton; Boone, Kyle; Victor, Tara; Miora, Deborah; Cottingham, Maria; Ziegler, Elizabeth; Zeller, Michelle; Wright, Matthew

    2014-01-01

    The purpose of this archival study was to identify performance validity tests (PVTs) and standard IQ and neurocognitive test scores which, singly or in combination, differentiate credible patients of low IQ (FSIQ ≤ 75; n = 55) from non-credible patients. We compared the credible participants against a sample of 74 non-credible patients who appeared to have been attempting to feign low intelligence specifically (FSIQ ≤ 75), as well as a larger non-credible sample (n = 383) unselected for IQ. The entire non-credible group scored significantly higher than the credible participants on measures of verbal crystallized intelligence/semantic memory and manipulation of overlearned information, while the credible group performed significantly better on many processing speed and memory tests. Additionally, credible women showed faster finger-tapping speeds than non-credible women. The credible group also scored significantly higher than the non-credible subgroup with low IQ scores on measures of attention, visual perceptual/spatial tasks, processing speed, verbal learning/list learning, and visual memory, and credible women continued to outperform non-credible women on finger tapping. When cut-offs were selected to maintain approximately 90% specificity in the credible group, sensitivity rates were highest for verbal and visual memory measures (i.e., TOMM trials 1 and 2; Warrington Words correct and time; Rey Word Recognition Test total; RAVLT Effort Equation, Trial 5, total across learning trials, short delay, recognition, and RAVLT/RO discriminant function; and Digit Symbol recognition), followed by select attentional PVT scores (i.e., b Test omissions and time to recite four digits forward). When failure rates were tabulated across the seven most sensitive scores, a cut-off of ≥ 2 failures was associated with 85.4% specificity and 85.7% sensitivity, while a cut-off of ≥ 3 failures resulted in 95.1% specificity and 66.0% sensitivity. Results are discussed in light of extant literature and directions for future research.
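
    The cut-off logic in the final analysis can be made concrete in a few lines. Here is a minimal sketch using made-up failure counts (the binomial rates and group sizes below are assumptions for illustration, not the study's data), showing how a ≥ 2 versus ≥ 3 failure rule trades specificity against sensitivity.

    ```python
    # Sketch: specificity/sensitivity of failure-count cut-offs over a set
    # of seven validity indicators, for a credible and a non-credible group.
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical numbers of failed indicators (out of 7) per participant.
    credible = rng.binomial(7, 0.10, size=55)       # credible, low-IQ group
    non_credible = rng.binomial(7, 0.45, size=383)  # non-credible group

    for cutoff in (2, 3):
        specificity = np.mean(credible < cutoff)       # credible correctly passed
        sensitivity = np.mean(non_credible >= cutoff)  # non-credible flagged
        print(f">= {cutoff} failures: specificity={specificity:.1%}, "
              f"sensitivity={sensitivity:.1%}")
    ```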

  9. Temporal and visual source memory deficits among ecstasy/polydrug users.

    PubMed

    Fisk, John E; Gallagher, Denis T; Hadjiefthyvoulou, Florentia; Montgomery, Catharine

    2014-03-01

    We wished to investigate whether source memory judgements are adversely affected by recreational illicit drug use. Sixty-two ecstasy/polydrug users and 75 non-ecstasy users completed a source memory task, in which they tried to determine whether or not a word had been previously presented and if so, attempted to recall the format, location and temporal position in which the word had occurred. While not differing in terms of the number of hits and false positive responses, ecstasy/polydrug users adopted a more liberal decision criterion when judging if a word had been presented previously. With regard to source memory, users were less able to determine the format in which words had been presented (upper versus lower case). Female users did worse than female nonusers in determining which list (first or second) a word was from. Unexpectedly, the current frequency of cocaine use was negatively associated with list and case source memory performance. Given the role that source memory plays in everyday cognition, those who use cocaine more frequently might have more difficulty in everyday tasks such as recalling the sources of crucial information or making use of contextual information as an aid to learning.

  10. What can graph theory tell us about word learning and lexical retrieval?

    PubMed

    Vitevitch, Michael S

    2008-04-01

    Graph theory and the new science of networks provide a mathematically rigorous approach to examine the development and organization of complex systems. These tools were applied to the mental lexicon to examine the organization of words in the lexicon and to explore how that structure might influence the acquisition and retrieval of phonological word-forms. Pajek, a program for large network analysis and visualization (V. Batagelj & A. Mrvar, 1998), was used to examine several characteristics of a network derived from a computerized database of the adult lexicon. Nodes in the network represented words, and a link connected two nodes if the words were phonological neighbors. The average path length and clustering coefficient suggest that the phonological network exhibits small-world characteristics. The degree distribution was fit better by an exponential rather than a power-law function. Finally, the network exhibited assortative mixing by degree. Some of these structural characteristics were also found in graphs that were formed by 2 simple stochastic processes, suggesting that similar processes might influence the development of the lexicon. The graph theoretic perspective may provide novel insights about the mental lexicon and lead to future studies that help us better understand language development and processing.
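
    The measures reported here (clustering coefficient, average path length, assortative mixing by degree) are straightforward to compute with modern network libraries. Below is a minimal sketch in Python with networkx, using a toy phoneme lexicon rather than the full adult database the study analysed with Pajek; the transcriptions are invented for illustration.

    ```python
    # Sketch: build a phonological-neighbor graph (words one phoneme apart
    # by substitution, insertion, or deletion) and compute the reported
    # small-world statistics on its largest connected component.
    import networkx as nx

    # Tiny toy lexicon: word -> phoneme string.
    lexicon = {"cat": "kat", "bat": "bat", "hat": "hat",
               "cast": "kast", "cut": "kʌt", "dog": "dɔg"}

    def one_phoneme_apart(a, b):
        """True if b differs from a by one substitution, insertion, or deletion."""
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        if abs(len(a) - len(b)) == 1:
            short, long_ = sorted((a, b), key=len)
            return any(long_[:i] + long_[i + 1:] == short for i in range(len(long_)))
        return False

    G = nx.Graph()
    G.add_nodes_from(lexicon)
    words = list(lexicon)
    for i, w1 in enumerate(words):
        for w2 in words[i + 1:]:
            if one_phoneme_apart(lexicon[w1], lexicon[w2]):
                G.add_edge(w1, w2)

    giant = G.subgraph(max(nx.connected_components(G), key=len))
    print("clustering coefficient:", nx.average_clustering(giant))
    print("average path length:", nx.average_shortest_path_length(giant))
    print("degree assortativity:", nx.degree_assortativity_coefficient(giant))
    ```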

  11. Children value informativity over logic in word learning.

    PubMed

    Ramscar, Michael; Dye, Melody; Klein, Joseph

    2013-06-01

    The question of how children learn the meanings of words has long puzzled philosophers and psychologists. As Quine famously pointed out, simply hearing a word in context reveals next to nothing about its meaning. How then do children learn to understand and use words correctly? Here, we show how learning theory can offer an elegant solution to this seemingly intractable puzzle in language acquisition. From it, we derived formal predictions about word learning in situations of Quinean ambiguity, and subsequently tested our predictions on toddlers, undergraduates, and developmental psychologists. The toddlers' performance was consistent both with our predictions and with the workings of implicit mechanisms that can facilitate the learning of meaningful lexical systems. Adults adopted a markedly different and likely suboptimal strategy. These results suggest one explanation for why early word learning can appear baffling: Adult intuitions may be a poor source of insight into how children learn.

  12. The Modulation of Visual and Task Characteristics of a Writing System on Hemispheric Lateralization in Visual Word Recognition--A Computational Exploration

    ERIC Educational Resources Information Center

    Hsiao, Janet H.; Lam, Sze Man

    2013-01-01

    Through computational modeling, here we examine whether visual and task characteristics of writing systems alone can account for lateralization differences in visual word recognition between different languages without assuming influence from left hemisphere (LH) lateralized language processes. We apply a hemispheric processing model of face…

  13. The Effect of Hearing Loss on Novel Word Learning in Infant- and Adult-Directed Speech.

    PubMed

    Robertson, V Susie; von Hapsburg, Deborah; Hay, Jessica S

    Relatively little is known about how young children with hearing impairment (HI) learn novel words in infant- and adult-directed speech (ADS). Infant-directed speech (IDS) supports word learning in typically developing infants relative to ADS. This study examined how children with normal hearing (NH) and children with HI learn novel words in IDS and ADS. It was predicted that IDS would support novel word learning in both groups of children. In addition, children with HI were expected to be less proficient word learners as compared with their NH peers. A looking-while-listening paradigm was used to measure novel word learning in 16 children with sensorineural HI (age range 23.2 to 42.1 months) who wore either bilateral hearing aids (n = 10) or bilateral cochlear implants (n = 6) and 16 children with NH (age range 23.1 to 42.1 months) who were matched for gender, chronological age, and maternal education level. Two measures of word learning were assessed (accuracy and reaction time). Each child participated in two experiments approximately 1 week apart, one in IDS and one in ADS. Both groups successfully learned the novel words in both speech type conditions, as evidenced by children looking at the correct picture significantly above chance. As a group, children with NH outperformed children with HI in the novel word learning task; however, there were no significant differences between performance on IDS versus ADS. More fine-grained time course analyses revealed that children with HI, and particularly children who use hearing aids, had more difficulty learning novel words in ADS, compared with children with NH. The pattern of results observed in the children with HI suggests that they may need extended support from clinicians and caregivers, through the use of IDS, during novel word learning. Future research should continue to focus on understanding the factors (e.g., device type and use, age of intervention, audibility, acoustic characteristics of input, etc.) that may influence word learning in children with HI in both IDS and ADS.

  14. Neural competition as a developmental process: Early hemispheric specialization for word processing delays specialization for face processing

    PubMed Central

    Li, Su; Lee, Kang; Zhao, Jing; Yang, Zhi; He, Sheng; Weng, Xuchu

    2013-01-01

    Little is known about the impact of learning to read on early neural development for word processing and its collateral effects on neural development in non-word domains. Here, we examined the effect of early exposure to reading on neural responses to both word and face processing in preschool children with the use of the Event Related Potential (ERP) methodology. We specifically linked children’s reading experience (indexed by their sight vocabulary) to two major neural markers: the amplitude differences between the left and right N170 on the bilateral posterior scalp sites and the hemispheric spectrum power differences in the γ band on the same scalp sites. The results showed that the left-lateralization of both the word N170 and the spectrum power in the γ band were significantly positively related to vocabulary. In contrast, vocabulary and the word left-lateralization both had a strong negative direct effect on the face right-lateralization. Also, vocabulary negatively correlated with the right-lateralized face spectrum power in the γ band even after the effects of age and the word spectrum power were partialled out. The present study provides direct evidence regarding the role of reading experience in the neural specialization of word and face processing above and beyond the effect of maturation. The present findings taken together suggest that the neural development of visual word processing competes with that of face processing before the process of neural specialization has been consolidated. PMID:23462239

  15. Music and words in the visual cortex: The impact of musical expertise.

    PubMed

    Mongelli, Valeria; Dehaene, Stanislas; Vinckier, Fabien; Peretz, Isabelle; Bartolomeo, Paolo; Cohen, Laurent

    2017-01-01

    How does the human visual system accommodate expertise for two simultaneously acquired symbolic systems? We used fMRI to compare activations induced in the visual cortex by musical notation, written words and other classes of objects, in professional musicians and in musically naïve controls. First, irrespective of expertise, selective activations for music were posterior and lateral to activations for words in the left occipitotemporal cortex. This indicates that symbols characterized by different visual features engage distinct cortical areas. Second, musical expertise increased the volume of activations for music and led to an anterolateral displacement of word-related activations. In musicians, there was also a dramatic increase of the brain-scale networks connected to the music-selective visual areas. Those findings reveal that acquiring a double visual expertise involves an expansion of category-selective areas, the development of novel long-distance functional connectivity, and possibly some competition between categories for the colonization of cortical space. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. A Novel Image Retrieval Based on Visual Words Integration of SIFT and SURF

    PubMed Central

    Ali, Nouman; Bajwa, Khalid Bashir; Sablatnig, Robert; Chatzichristofis, Savvas A.; Iqbal, Zeshan; Rashid, Muhammad; Habib, Hafiz Adnan

    2016-01-01

    With the recent evolution of technology, the number of image archives has increased exponentially. In Content-Based Image Retrieval (CBIR), high-level visual information is represented in the form of low-level features. The semantic gap between the low-level features and the high-level image concepts is an open research problem. In this paper, we present a novel visual words integration of Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). The two local features representations are selected for image retrieval because SIFT is more robust to the change in scale and rotation, while SURF is robust to changes in illumination. The visual words integration of SIFT and SURF adds the robustness of both features to image retrieval. The qualitative and quantitative comparisons conducted on Corel-1000, Corel-1500, Corel-2000, Oliva and Torralba and Ground Truth image benchmarks demonstrate the effectiveness of the proposed visual words integration. PMID:27315101
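
    A minimal sketch of the bag-of-visual-words integration the abstract describes: build one vocabulary from SIFT descriptors and one from SURF descriptors, then represent each image as the concatenation of its two word histograms. This is not the authors' implementation; the cluster counts and file names are placeholders, and SURF lives in the non-free opencv-contrib modules, so it requires a contrib build with non-free support enabled.

    ```python
    # Sketch: separate SIFT and SURF vocabularies, concatenated histograms.
    import cv2
    import numpy as np
    from sklearn.cluster import MiniBatchKMeans

    def descriptors(detector, gray_images):
        """Stack local descriptors from all images into one matrix."""
        descs = [detector.detectAndCompute(img, None)[1] for img in gray_images]
        return np.vstack([d for d in descs if d is not None])

    def bovw_histogram(detector, vocab, gray_img):
        """Histogram of visual-word assignments for one image."""
        _, d = detector.detectAndCompute(gray_img, None)
        words = vocab.predict(d.astype(np.float32))
        return np.bincount(words, minlength=vocab.n_clusters)

    sift = cv2.SIFT_create()
    surf = cv2.xfeatures2d.SURF_create(400)  # contrib-only, non-free module

    # Placeholder file list; a real vocabulary needs many training images.
    train = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in ["img1.jpg", "img2.jpg"]]

    # One vocabulary per feature type (vocabulary sizes are arbitrary here).
    sift_vocab = MiniBatchKMeans(n_clusters=200).fit(descriptors(sift, train))
    surf_vocab = MiniBatchKMeans(n_clusters=200).fit(descriptors(surf, train))

    # Integrated representation: concatenation of the two histograms.
    query = train[0]
    feature = np.concatenate([bovw_histogram(sift, sift_vocab, query),
                              bovw_histogram(surf, surf_vocab, query)])
    ```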

  17. An associative account of the development of word learning.

    PubMed

    Sloutsky, Vladimir M; Yim, Hyungwook; Yao, Xin; Dennis, Simon

    2017-09-01

    Word learning is a notoriously difficult induction problem because meaning is underdetermined by positive examples. How do children solve this problem? Some have argued that word learning is achieved by means of inference: young word learners rely on a number of assumptions that reduce the overall hypothesis space by favoring some meanings over others. However, these approaches have difficulty explaining how words are learned from conversations or text, without pointing or explicit instruction. In this research, we propose an associative mechanism that can account for such learning. In a series of experiments, 4-year-olds and adults were presented with sets of words that included a single nonsense word (e.g. dax). Some lists were taxonomic (i.e., all items were members of a given category), some were associative (i.e., all items were associates of a given category, but not members), and some were mixed. Participants were asked to indicate whether the nonsense word was an animal or an artifact. Adults exhibited evidence of learning when lists consisted of either associatively or taxonomically related items. In contrast, children exhibited evidence of word learning only when lists consisted of associatively related items. These results present challenges to several extant models of word learning, and a new model based on the distinction between syntagmatic and paradigmatic associations is proposed. Copyright © 2017 Elsevier Inc. All rights reserved.

  18. Assessing the reading comprehension of adults with learning disabilities.

    PubMed

    Jones, F W; Long, K; Finlay, W M L

    2006-06-01

    This study's aim was to begin the process of measuring the reading comprehension of adults with mild and borderline learning disabilities, in order to generate information to help clinicians and other professionals to make written material for adults with learning disabilities more comprehensible. The Test for the Reception of Grammar (TROG), with items presented visually rather than orally, and the Reading Comprehension sub-test of the Wechsler Objective Reading Dimensions (WORD) battery were given to 24 service-users of a metropolitan community learning disability team who had an estimated IQ in the range 50-79. These tests were demonstrated to have satisfactory split-half reliability and convergent validity with this population, supporting both their use in this study and in clinical work. Data are presented concerning the distribution across the sample of reading-ages and the comprehension of written grammatical constructions. These data should be useful to those who are preparing written material for adults with learning disabilities.

  19. Word Learning Deficits in Children With Dyslexia

    PubMed Central

    Hogan, Tiffany; Green, Samuel; Gray, Shelley; Cabbage, Kathryn; Cowan, Nelson

    2017-01-01

    Purpose The purpose of this study is to investigate word learning in children with dyslexia to ascertain their strengths and weaknesses during the configuration stage of word learning. Method Children with typical development (N = 116) and dyslexia (N = 68) participated in computer-based word learning games that assessed word learning in 4 sets of games that manipulated phonological or visuospatial demands. All children were monolingual English-speaking 2nd graders without oral language impairment. The word learning games measured children's ability to link novel names with novel objects, to make decisions about the accuracy of those names and objects, to recognize the semantic features of the objects, and to produce the names of the novel words. Accuracy data were analyzed using analyses of covariance with nonverbal intelligence scores as a covariate. Results Word learning deficits were evident for children with dyslexia across every type of manipulation and on 3 of 5 tasks, but not for every combination of task/manipulation. Deficits were more common when task demands taxed phonology. Visuospatial manipulations led to both disadvantages and advantages for children with dyslexia. Conclusion Children with dyslexia evidence spoken word learning deficits, but their performance is highly dependent on manipulations and task demand, suggesting a processing trade-off between visuospatial and phonological demands. PMID:28388708

  20. Difficulty in learning similar-sounding words: a developmental stage or a general property of learning?

    PubMed Central

    Pajak, Bozena; Creel, Sarah C.; Levy, Roger

    2016-01-01

    How are languages learned, and to what extent are learning mechanisms similar in infant native-language (L1) and adult second-language (L2) acquisition? In terms of vocabulary acquisition, we know from the infant literature that the ability to discriminate similar-sounding words at a particular age does not guarantee successful word-meaning mapping at that age (Stager & Werker, 1997). However, it is unclear whether this difficulty arises from developmental limitations of young infants (e.g., poorer working memory) or whether it is an intrinsic part of the initial word learning, L1 and L2 alike. Here we show that adults of particular L1 backgrounds—just like young infants—have difficulty learning similar-sounding L2 words that they can nevertheless discriminate perceptually. This suggests that the early stages of word learning, whether L1 or L2, intrinsically involve difficulty in mapping similar-sounding words onto referents. We argue that this is due to an interaction between two main factors: (1) memory limitations that pose particular challenges for highly similar-sounding words, and (2) uncertainty regarding the language's phonetic categories, as these are being learned concurrently with words. Overall, our results show that vocabulary acquisition in infancy and in adulthood share more similarities than previously thought, thus supporting the existence of common learning mechanisms that operate throughout the lifespan. PMID:26962959

  1. Developmental changes in the inferior frontal cortex for selecting semantic representations

    PubMed Central

    Lee, Shu-Hui; Booth, James R.; Chen, Shiou-Yuan; Chou, Tai-Li

    2012-01-01

    Functional magnetic resonance imaging (fMRI) was used to examine the neural correlates of semantic judgments to Chinese words in a group of 10–15 year old Chinese children. Two semantic tasks were used: visual–visual versus visual–auditory presentation. The first word was visually presented (i.e. character) and the second word was either visually or auditorily presented, and the participant had to determine if these two words were related in meaning. Different from English, Chinese has many homophones in which each spoken word corresponds to many characters. The visual–auditory task, therefore, required greater engagement of cognitive control for the participants to select a semantically appropriate answer for the second homophonic word. Weaker association pairs produced greater activation in the mid-ventral region of left inferior frontal gyrus (BA 45) for both tasks. However, this effect was stronger for the visual–auditory task than for the visual–visual task and this difference was stronger for older compared to younger children. The findings suggest greater involvement of semantic selection mechanisms in the cross-modal task requiring the access of the appropriate meaning of homophonic spoken words, especially for older children. PMID:22337757

  2. Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.

    PubMed

    Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf

    2015-09-01

    Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions, in the vicinity of the putative visual word form area, around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.

  3. The relationship between novel word learning and anomia treatment success in adults with chronic aphasia.

    PubMed

    Dignam, Jade; Copland, David; Rawlings, Alicia; O'Brien, Kate; Burfein, Penni; Rodriguez, Amy D

    2016-01-29

    Learning capacity may influence an individual's response to aphasia rehabilitation. However, investigations into the relationship between novel word learning ability and response to anomia therapy are lacking. The aim of the present study was to evaluate the novel word learning ability in post-stroke aphasia and to establish the relationship between learning ability and anomia treatment outcomes. We also explored the influence of locus of language breakdown on novel word learning ability and anomia treatment response. Thirty adults (6F; 24M) with chronic, post-stroke aphasia were recruited to the study. Prior to treatment, participants underwent an assessment of language, which included the Comprehensive Aphasia Test and three baseline confrontation naming probes in order to develop sets of treated and untreated items. We also administered the novel word learning paradigm, in which participants learnt novel names associated with unfamiliar objects and were immediately tested on recall (expressive) and recognition (receptive) tasks. Participants completed 48 h of Aphasia Language Impairment and Functioning Therapy (Aphasia LIFT) over a 3 week (intensive) or 8 week (distributed) schedule. Therapy primarily targeted the remediation of word retrieval deficits, so naming of treated and untreated items immediately post-therapy and at 1 month follow-up was used to determine therapeutic response. Performance on recall and recognition tasks demonstrated that participants were able to learn novel words; however, performance was variable and was influenced by participants' aphasia severity, lexical-semantic processing and locus of language breakdown. Novel word learning performance was significantly correlated with participants' response to therapy for treated items at post-therapy. In contrast, participants' novel word learning performance was not correlated with therapy gains for treated items at 1 month follow-up or for untreated items at either time point. Therapy intensity did not influence treatment outcomes. This is the first group study to directly examine the relationship between novel word learning and therapy outcomes for anomia rehabilitation in adults with aphasia. Importantly, we found that novel word learning performance was correlated with therapy outcomes. We propose that novel word learning ability may contribute to the initial acquisition of treatment gains in anomia rehabilitation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Adults' Self-Directed Learning of an Artificial Lexicon: The Dynamics of Neighborhood Reorganization

    ERIC Educational Resources Information Center

    Bardhan, Neil Prodeep

    2010-01-01

    Artificial lexicons have previously been used to examine the time course of the learning and recognition of spoken words, the role of segment type in word learning, and the integration of context during spoken word recognition. However, in all of these studies the experimenter determined the frequency and order of the words to be learned. In three…

  5. Nonword Repetition and Vocabulary Knowledge as Predictors of Children's Phonological and Semantic Word Learning.

    PubMed

    Adlof, Suzanne M; Patten, Hannah

    2017-03-01

    This study examined the unique and shared variance that nonword repetition and vocabulary knowledge contribute to children's ability to learn new words. Multiple measures of word learning were used to assess recall and recognition of phonological and semantic information. Fifty children, with a mean age of 8 years (range 5-12 years), completed experimental assessments of word learning and norm-referenced assessments of receptive and expressive vocabulary knowledge and nonword repetition skills. Hierarchical multiple regression analyses examined the variance in word learning that was explained by vocabulary knowledge and nonword repetition after controlling for chronological age. Together with chronological age, nonword repetition and vocabulary knowledge explained up to 44% of the variance in children's word learning. Nonword repetition was the stronger predictor of phonological recall, phonological recognition, and semantic recognition, whereas vocabulary knowledge was the stronger predictor of verbal semantic recall. These findings extend the results of past studies indicating that both nonword repetition skill and existing vocabulary knowledge are important for new word learning, but the relative influence of each predictor depends on the way word learning is measured. Suggestions for further research involving typically developing children and children with language or reading impairments are discussed.
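
    The hierarchical regression logic reported here (age entered first, then nonword repetition and vocabulary, with the increment in explained variance attributed to the later predictors) can be sketched in a few lines. The synthetic data and effect sizes below are purely illustrative assumptions, not the study's values.

    ```python
    # Sketch: hierarchical regression, comparing R^2 of an age-only model
    # with R^2 after adding nonword repetition and vocabulary knowledge.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 50
    age = rng.uniform(5, 12, n)
    nonword_rep = 0.4 * age + rng.normal(0, 1, n)
    vocabulary = 0.5 * age + rng.normal(0, 1, n)
    word_learning = (0.2 * age + 0.5 * nonword_rep + 0.3 * vocabulary
                     + rng.normal(0, 1, n))

    step1 = sm.OLS(word_learning, sm.add_constant(age)).fit()
    step2 = sm.OLS(word_learning, sm.add_constant(
        np.column_stack([age, nonword_rep, vocabulary]))).fit()
    print(f"R2 age only: {step1.rsquared:.2f}")
    print(f"R2 full model: {step2.rsquared:.2f}; increment: "
          f"{step2.rsquared - step1.rsquared:.2f}")
    ```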

  6. The neurobiological basis of seeing words

    PubMed Central

    Wandell, Brian A.

    2011-01-01

    This review summarizes recent ideas about the cortical circuits for seeing words, an important part of the brain system for reading. Historically, the link between the visual cortex and reading has been contentious. One influential position is that the visual cortex plays a minimal role, limited to identifying contours, and that information about these contours is delivered to cortical regions specialized for reading and language. An alternative position is that specializations for seeing words develop within the visual cortex itself. Modern neuroimaging measurements—including both functional magnetic resonance imaging (fMRI) and diffusion weighted imaging with tractography data—support the position that circuitry for seeing the statistical regularities of word forms develops within the ventral occipitotemporal cortex, which also contains important circuitry for seeing faces, colors, and forms. The review explains new findings about the visual pathways, including visual field maps, as well as new findings about how we see words. The measurements from the two fields are in close cortical proximity, and there are good opportunities for coordinating theoretical ideas about function in the ventral occipitotemporal cortex. PMID:21486296

  7. The neurobiological basis of seeing words.

    PubMed

    Wandell, Brian A

    2011-04-01

    This review summarizes recent ideas about the cortical circuits for seeing words, an important part of the brain system for reading. Historically, the link between the visual cortex and reading has been contentious. One influential position is that the visual cortex plays a minimal role, limited to identifying contours, and that information about these contours is delivered to cortical regions specialized for reading and language. An alternative position is that specializations for seeing words develop within the visual cortex itself. Modern neuroimaging measurements, including both functional magnetic resonance imaging (fMRI) and diffusion weighted imaging with tractography (DTI) data, support the position that circuitry for seeing the statistical regularities of word forms develops within the ventral occipitotemporal cortex, which also contains important circuitry for seeing faces, colors, and forms. This review explains new findings about the visual pathways, including visual field maps, as well as new findings about how we see words. The measurements from the two fields are in close cortical proximity, and there are good opportunities for coordinating theoretical ideas about function in the ventral occipitotemporal cortex. © 2011 New York Academy of Sciences.

  8. Auditory Emotional Cues Enhance Visual Perception

    ERIC Educational Resources Information Center

    Zeelenberg, Rene; Bocanegra, Bruno R.

    2010-01-01

    Recent studies show that emotional stimuli impair performance to subsequently presented neutral stimuli. Here we show a cross-modal perceptual enhancement caused by emotional cues. Auditory cue words were followed by a visually presented neutral target word. Two-alternative forced-choice identification of the visual target was improved by…

  9. Learning linear transformations between counting-based and prediction-based word embeddings

    PubMed Central

    Hayashi, Kohei; Kawarabayashi, Ken-ichi

    2017-01-01

    Despite the growing interest in prediction-based word embedding learning methods, it remains unclear how the vector spaces learnt by the prediction-based methods differ from that of the counting-based methods, or whether one can be transformed into the other. To study the relationship between counting-based and prediction-based embeddings, we propose a method for learning a linear transformation between two given sets of word embeddings. Our proposal contributes to the word embedding learning research in three ways: (a) we propose an efficient method to learn a linear transformation between two sets of word embeddings, (b) using the transformation learnt in (a), we empirically show that it is possible to predict distributed word embeddings for novel unseen words, and (c) empirically it is possible to linearly transform counting-based embeddings to prediction-based embeddings, for frequent words, different POS categories, and varying degrees of ambiguities. PMID:28926629
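
    Contribution (a) amounts to a least-squares problem: find the matrix W minimizing ||XW - Y||^2 over words present in both embedding sets, then reuse W to predict target-space vectors for words seen only in the source space. Here is a minimal sketch on toy data; the dimensions and noise level are arbitrary assumptions.

    ```python
    # Sketch: closed-form linear map between two word-embedding spaces.
    import numpy as np

    rng = np.random.default_rng(3)
    d_count, d_pred, n_words = 50, 100, 5000

    X = rng.normal(size=(n_words, d_count))            # counting-based embeddings
    W_true = rng.normal(size=(d_count, d_pred))
    Y = X @ W_true + 0.01 * rng.normal(size=(n_words, d_pred))  # prediction-based

    # Least-squares solution for the transformation.
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)

    # Predict an embedding for a "novel" word seen only in the source space.
    x_new = rng.normal(size=d_count)
    y_hat = x_new @ W
    print("relative reconstruction error:",
          np.linalg.norm(X @ W - Y) / np.linalg.norm(Y))
    ```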

  10. Semantic Coherence Facilitates Distributional Learning.

    PubMed

    Ouyang, Long; Boroditsky, Lera; Frank, Michael C

    2017-04-01

    Computational models have shown that purely statistical knowledge about words' linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that "postman" and "mailman" are semantically similar because they have quantitatively similar patterns of association with other words (e.g., they both tend to occur with words like "deliver," "truck," "package"). In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that (a) the presence of familiar semantic reference points facilitates distributional learning and (b) this effect crucially depends both on the presence of known words and the adherence of these known words to some semantic organization. Copyright © 2016 Cognitive Science Society, Inc.
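
    The distributional intuition in the opening sentences is easy to demonstrate: words that occur in similar contexts (e.g. "postman"/"mailman") end up with similar co-occurrence vectors. Here is a minimal sketch on a toy two-sentence corpus; the corpus and window size are invented for illustration.

    ```python
    # Sketch: co-occurrence vectors within a +/-2-word window, compared by
    # cosine similarity.
    from collections import Counter, defaultdict

    corpus = ("the postman will deliver the package by truck . "
              "the mailman will deliver the letter by truck .").split()

    window = 2
    vectors = defaultdict(Counter)
    for i, w in enumerate(corpus):
        for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
            if j != i:
                vectors[w][corpus[j]] += 1

    def cosine(a, b):
        dot = sum(a[k] * b[k] for k in set(a) & set(b))
        norm = lambda v: sum(x * x for x in v.values()) ** 0.5
        return dot / (norm(a) * norm(b))

    print(cosine(vectors["postman"], vectors["mailman"]))  # high
    print(cosine(vectors["postman"], vectors["truck"]))    # lower
    ```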

  11. Modulation of brain activity by multiple lexical and word form variables in visual word recognition: A parametric fMRI study.

    PubMed

    Hauk, Olaf; Davis, Matthew H; Pulvermüller, Friedemann

    2008-09-01

    Psycholinguistic research has documented a range of variables that influence visual word recognition performance. Many of these variables are highly intercorrelated. Most previous studies have used factorial designs, which do not exploit the full range of values available for continuous variables, and are prone to skewed stimulus selection as well as to effects of the baseline (e.g. when contrasting words with pseudowords). In our study, we used a parametric approach to study the effects of several psycholinguistic variables on brain activation. We focussed on the variable word frequency, which has been used in numerous previous behavioural, electrophysiological and neuroimaging studies, in order to investigate the neuronal network underlying visual word processing. Furthermore, we investigated the variable orthographic typicality as well as a combined variable for word length and orthographic neighbourhood size (N), for which neuroimaging results are still either scarce or inconsistent. Data were analysed using multiple linear regression analysis of event-related fMRI data acquired from 21 subjects in a silent reading paradigm. The frequency variable correlated negatively with activation in left fusiform gyrus, bilateral inferior frontal gyri and bilateral insulae, indicating that word frequency can affect multiple aspects of word processing. N correlated positively with brain activity in left and right middle temporal gyri as well as right inferior frontal gyrus. Thus, our analysis revealed multiple distinct brain areas involved in visual word processing within one data set.

  12. Using Wordle as a Supplementary Research Tool

    ERIC Educational Resources Information Center

    McNaught, Carmel; Lam, Paul

    2010-01-01

    A word cloud is a special visualization of text in which the more frequently used words are effectively highlighted by occupying more prominence in the representation. We have used Wordle to produce word-cloud analyses of the spoken and written responses of informants in two research projects. The product demonstrates a fast and visually rich way…

  13. Age-of-Acquisition Effects in Visual Word Recognition: Evidence from Expert Vocabularies

    ERIC Educational Resources Information Center

    Stadthagen-Gonzalez, Hans; Bowers, Jeffrey S.; Damian, Markus F.

    2004-01-01

    Three experiments assessed the contributions of age-of-acquisition (AoA) and frequency to visual word recognition. Three databases were created from electronic journals in chemistry, psychology and geology in order to identify technical words that are extremely frequent in each discipline but acquired late in life. In Experiment 1, psychologists…

  14. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  15. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2012-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences among individuals who contributed to the English…

  16. Utterance-final position and pitch marking aid word learning in school-age children.

    PubMed

    Filippi, Piera; Laaha, Sabine; Fitch, W Tecumseh

    2017-08-01

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word-meaning mapping was the co-occurrence of target words and referents. This cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position, and at the same fundamental frequency as all the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position, and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only for the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of words' utterance-final alignment and pitch enhancement boosts word learning.

  17. MEGALEX: A megastudy of visual and auditory word recognition.

    PubMed

    Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan

    2018-06-01

    Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website (https://sedufau.shinyapps.io/megalex/) and are searchable at www.lexique.org, inside the Open Lexique search engine.

  18. Syllables and bigrams: orthographic redundancy and syllabic units affect visual word recognition at different processing levels.

    PubMed

    Conrad, Markus; Carreiras, Manuel; Tamm, Sascha; Jacobs, Arthur M

    2009-04-01

    Over the last decade, there has been increasing evidence for syllabic processing during visual word recognition. If syllabic effects prove to be independent from orthographic redundancy, this would seriously challenge the ability of current computational models to account for the processing of polysyllabic words. Three experiments are presented to disentangle effects of the frequency of syllabic units and orthographic segments in lexical decision. In Experiment 1 the authors obtained an inhibitory syllable frequency effect that was unaffected by the presence or absence of a bigram trough at the syllable boundary. In Experiments 2 and 3 an inhibitory effect of initial syllable frequency but a facilitative effect of initial bigram frequency emerged when manipulating 1 of the 2 measures and controlling for the other in Spanish words starting with consonant-vowel syllables. The authors conclude that effects of syllable frequency and letter-cluster frequency are independent and arise at different processing levels of visual word recognition. Results are discussed within the framework of an interactive activation model of visual word recognition. (c) 2009 APA, all rights reserved.

  19. Phonological and Semantic Cues to Learning from Word-Types

    PubMed Central

    Richtsmeier, Peter

    2017-01-01

    Word-types represent the primary form of data for many models of phonological learning, and they often predict performance in psycholinguistic tasks. Word-types are often tacitly defined as phonologically unique words. Yet, an explicit test of this definition is lacking, and natural language patterning suggests that word meaning could also act as a cue to word-type status. This possibility was tested in a statistical phonotactic learning experiment in which phonological and semantic properties of word-types varied. During familiarization, the learning targets—word-medial consonant sequences—were instantiated either by four related word-types or by just one word-type (the experimental frequency factor). The expectation was that more word-types would lead participants to generalize the target sequences. Regarding semantic cues, related word-types were either associated with different referents or all with a single referent. Regarding phonological cues, related word-types differed from each other by one, two, or more phonemes. At test, participants rated novel wordforms for their similarity to the familiarization words. When participants heard four related word-types, they gave higher ratings to test words with the same consonant sequences, irrespective of the phonological and semantic manipulations. The results support the existing phonological definition of word-types. PMID:29187914

  20. Semantic and phonological schema influence spoken word learning and overnight consolidation.

    PubMed

    Havas, Viktória; Taylor, Jsh; Vaquero, Lucía; de Diego-Balaguer, Ruth; Rodríguez-Fornells, Antoni; Davis, Matthew H

    2018-06-01

    We studied the initial acquisition and overnight consolidation of new spoken words that resemble words in the native language (L1) or in an unfamiliar, non-native language (L2). Spanish-speaking participants learned the spoken forms of novel words in their native language (Spanish) or in a different language (Hungarian), which were paired with pictures of familiar or unfamiliar objects, or no picture. We thereby assessed, in a factorial way, the impact of existing knowledge (schema) on word learning by manipulating both semantic (familiar vs unfamiliar objects) and phonological (L1- vs L2-like novel words) familiarity. Participants were trained and tested with a 12-hr intervening period that included overnight sleep or daytime wakefulness. Our results showed (1) benefits of sleep to recognition memory that were greater for words with L2-like phonology and (2) that learned associations with familiar but not unfamiliar pictures enhanced recognition memory for novel words. Implications for complementary systems accounts of word learning are discussed.

  1. Hierarchical extreme learning machine based reinforcement learning for goal localization

    NASA Astrophysics Data System (ADS)

    AlDahoul, Nouar; Zaw Htike, Zaw; Akmeliawati, Rini

    2017-03-01

    The objective of goal localization is to find the location of goals in noisy environments. Simple actions are performed to move the agent towards the goal. The goal detector should be capable of minimizing the error between the predicted locations and the true ones. Only a few regions need to be processed by the agent, which reduces the computational effort and increases the speed of convergence. In this paper, a reinforcement learning (RL) method was utilized to find an optimal series of actions to localize the goal region. The visual data, a set of images, are high-dimensional unstructured data and need to be represented efficiently to obtain a robust detector. Various deep reinforcement models have already been used to localize a goal, but most of them take a long time to learn the model. This long learning time results from the iterative weight fine-tuning stage that is applied to find an accurate model. The Hierarchical Extreme Learning Machine (H-ELM) was used as a fast deep model that does not fine-tune the weights. In other words, hidden weights are generated randomly and output weights are calculated analytically. The H-ELM algorithm was used in this work to find good features for effective representation. This paper proposes a combination of the Hierarchical Extreme Learning Machine and reinforcement learning to find an optimal policy directly from visual input. This combination outperforms other methods in terms of accuracy and learning speed. The simulations and results were analysed using MATLAB.
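
    The speed claim rests on the ELM training scheme: hidden-layer weights are random and fixed, and only the output weights are solved for analytically. Below is a minimal single-hidden-layer sketch of that idea (the paper's actual H-ELM/RL pipeline is multi-layer and considerably more elaborate; the toy regression target is an invented stand-in for goal coordinates).

    ```python
    # Sketch: Extreme Learning Machine with random hidden weights and a
    # closed-form (ridge-regularized) solve for the output weights.
    import numpy as np

    rng = np.random.default_rng(4)

    def elm_fit(X, Y, n_hidden=256, reg=1e-3):
        """Return (W_in, b, W_out) for a single-hidden-layer ELM."""
        W_in = rng.normal(size=(X.shape[1], n_hidden))  # random, never trained
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W_in + b)                       # hidden activations
        # Output weights in closed form: no iterative fine-tuning.
        W_out = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
        return W_in, b, W_out

    def elm_predict(model, X):
        W_in, b, W_out = model
        return np.tanh(X @ W_in + b) @ W_out

    # Toy regression: predict a 2-D "goal coordinate" from a flattened image.
    X = rng.normal(size=(1000, 64))
    Y = np.column_stack([X[:, :8].sum(axis=1), X[:, 8:16].sum(axis=1)])
    model = elm_fit(X, Y)
    print("train MSE:", np.mean((elm_predict(model, X) - Y) ** 2))
    ```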

  2. Statistical Word Learning in Children with Autism Spectrum Disorder and Specific Language Impairment

    ERIC Educational Resources Information Center

    Haebig, Eileen; Saffran, Jenny R.; Ellis Weismer, Susan

    2017-01-01

    Background: Word learning is an important component of language development that influences child outcomes across multiple domains. Despite the importance of word knowledge, word-learning mechanisms are poorly understood in children with specific language impairment (SLI) and children with autism spectrum disorder (ASD). This study examined…

  3. Body in Mind: How Gestures Empower Foreign Language Learning

    ERIC Educational Resources Information Center

    Macedonia, Manuela; Knosche, Thomas R.

    2011-01-01

    It has previously been demonstrated that enactment (i.e., performing representative gestures during encoding) enhances memory for concrete words, in particular action words. Here, we investigate the impact of enactment on abstract word learning in a foreign language. We further ask if learning novel words with gestures facilitates sentence…

  4. Two- and Three-Year-Olds Track a Single Meaning during Word Learning: Evidence for Propose-but-Verify

    ERIC Educational Resources Information Center

    Woodard, Kristina; Gleitman, Lila R.; Trueswell, John C.

    2016-01-01

    A child word-learning experiment is reported that examines 2- and 3-year-olds' ability to learn the meanings of novel words across multiple, referentially ambiguous, word occurrences. Children were told they were going on an animal safari in which they would learn the names of unfamiliar animals. Critical trial sequences began with hearing a novel…

  5. Looking and touching: What extant approaches reveal about the structure of early word knowledge

    PubMed Central

    Hendrickson, Kristi; Mitsven, Samantha; Poulin-Dubois, Diane; Zesiger, Pascal; Friend, Margaret

    2014-01-01

    The goal of the current study is to assess the temporal dynamics of vision and action to evaluate the underlying word representations that guide infants’ responses. Sixteen-month-old infants participated in a two-alternative forced-choice word-picture matching task. We conducted a moment-by-moment analysis of looking and reaching behaviors as they occurred in tandem to assess the speed with which a prompted word was processed (visual reaction time) as a function of the type of haptic response: Target, Distractor, or No Touch. Visual reaction times (visual RTs) were significantly slower during No Touches compared to Distractor and Target Touches, which were statistically indistinguishable. The finding that visual RTs were significantly faster during Distractor Touches compared to No Touches suggests that incorrect and absent haptic responses appear to index distinct knowledge states: incorrect responses are associated with partial knowledge whereas absent responses appear to reflect a true failure to map lexical items to their target referents. Further, we found that those children who were faster at processing words were also those children who exhibited better haptic performance. This research provides a methodological clarification on knowledge measured by the visual and haptic modalities and new evidence for a continuum of word knowledge in the second year of life. PMID:25444711

  6. Characteristics of Chinese-English bilingual dyslexia in right occipito-temporal lesion.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Chan, Yiong Huak; Kwek, Kevin Jun Hong; Tan, Wilnard; Hameed, Shahul; Tan, Eng-King

    2017-11-01

    Current literature suggests that right hemisphere lesions produce predominant spatial-related dyslexic error in English speakers. However, little is known regarding such lesions in Chinese speakers. In this paper, we describe the dyslexic characteristics of a Chinese-English bilingual patient with a right posterior cortical lesion. He was found to have profound spatial-related errors during his English word reading, in both real words and non-words. During Chinese word reading, he made significantly fewer such errors than in English, probably due to the ideographic nature of the Chinese language. He was also found to commit phonological-like visual errors in English, characterized by error responses that were visually similar to the actual word. There was no significant difference in visual errors during English word reading compared with Chinese. In general, our patient's performance in both languages appears to be consistent with the current literature on right posterior hemisphere lesions. Additionally, his performance also likely suggests that the right posterior cortical region participates in the visual analysis of orthographical word representation, both in ideographic and alphabetic languages, at least from a bilingual perspective. Future studies should further examine the role of the right posterior region in initial visual analysis of both languages. Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. Individual Differences in Reported Visual Imagery and Memory Performance.

    ERIC Educational Resources Information Center

    McKelvie, Stuart J.; Demers, Elizabeth G.

    1979-01-01

    High- and low-visualizing males, identified by the self-report VVIQ, participated in a memory experiment involving abstract words, concrete words, and pictures. High-visualizers were superior on all items in short-term recall but superior only on pictures in long-term recall, supporting the VVIQ's validity. (Author/SJL)

  8. Dual Coding in Children.

    ERIC Educational Resources Information Center

    Burton, John K.; Wildman, Terry M.

    The purpose of this study was to test the applicability of the dual coding hypothesis to children's recall performance. The hypothesis predicts that visual interference will have a small effect on the recall of visually presented words or pictures, but that acoustic interference will cause a decline in recall of visually presented words and…

  9. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  10. Caffeine Improves Left Hemisphere Processing of Positive Words

    PubMed Central

    Kuchinke, Lars; Lux, Vanessa

    2012-01-01

    A positivity advantage is known in emotional word recognition in that positive words are consistently processed faster and with fewer errors compared to emotionally neutral words. A similar advantage is not evident for negative words. Results of divided visual field studies, where stimuli are presented in either the left or right visual field and are initially processed by the contra-lateral brain hemisphere, point to a specificity of the language-dominant left hemisphere. The present study examined this effect by showing that the intake of caffeine further enhanced the recognition performance of positive, but not negative or neutral stimuli compared to a placebo control group. Because this effect was only present in the right visual field/left hemisphere condition, and based on the close link between caffeine intake and dopaminergic transmission, this result points to a dopaminergic explanation of the positivity advantage in emotional word recognition. PMID:23144893

  11. Top-down modulation of ventral occipito-temporal responses during visual word recognition.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Price, Cathy J; Devlin, Joseph T

    2011-04-01

    Although interactivity is considered a fundamental principle of cognitive (and computational) models of reading, it has received far less attention in neural models of reading that instead focus on serial stages of feed-forward processing from visual input to orthographic processing to accessing the corresponding phonological and semantic information. In particular, the left ventral occipito-temporal (vOT) cortex is proposed to be the first stage where visual word recognition occurs prior to accessing nonvisual information such as semantics and phonology. We used functional magnetic resonance imaging (fMRI) to investigate whether there is evidence that activation in vOT is influenced top-down by the interaction of visual and nonvisual properties of the stimuli during visual word recognition tasks. Participants performed two different types of lexical decision tasks that focused on either visual or nonvisual properties of the word or word-like stimuli. The design allowed us to investigate how vOT activation during visual word recognition was influenced by a task change to the same stimuli and by a stimulus change during the same task. We found both stimulus- and task-driven modulation of vOT activation that can only be explained by top-down processing of nonvisual aspects of the task and stimuli. Our results are consistent with the hypothesis that vOT acts as an interface linking visual form with nonvisual processing in both bottom-up and top-down directions. Such interactive processing at the neural level is in agreement with cognitive and computational models of reading but challenges some of the assumptions made by current neuro-anatomical models of reading. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Does Grammatical Structure Accelerate Number Word Learning? Evidence from Learners of Dual and Non-Dual Dialects of Slovenian

    PubMed Central

    Plesničar, Vesna; Razboršek, Tina; Sullivan, Jessica; Barner, David

    2016-01-01

    How does linguistic structure affect children’s acquisition of early number word meanings? Previous studies have tested this question by comparing how children learning languages with different grammatical representations of number learn the meanings of labels for small numbers, like 1, 2, and 3. For example, children who acquire a language with singular-plural marking, like English, are faster to learn the word for 1 than children learning a language that lacks the singular-plural distinction, perhaps because the word for 1 is always used in singular contexts, highlighting its meaning. These studies are problematic, however, because reported differences in number word learning may be due to unmeasured cross-cultural differences rather than specific linguistic differences. To address this problem, we investigated number word learning in four groups of children from a single culture who spoke different dialects of the same language that differed chiefly with respect to how they grammatically mark number. We found that learning a dialect which features “dual” morphology (marking of pairs) accelerated children’s acquisition of the number word two relative to learning a “non-dual” dialect of the same language. PMID:27486802

  13. Statistical learning of an auditory sequence and reorganization of acquired knowledge: A time course of word segmentation and ordering.

    PubMed

    Daikoku, Tatsuya; Yatomi, Yutaka; Yumoto, Masato

    2017-01-27

    Previous neural studies have supported the hypothesis that statistical learning mechanisms are used broadly across different domains such as language and music. However, these studies have only investigated a single aspect of statistical learning at a time, such as recognizing word boundaries or learning word order patterns. In this study, we investigated neurally how these two levels of statistical learning, recognizing word boundaries and word ordering, are reflected in neuromagnetic responses, and how acquired statistical knowledge is reorganized when the syntactic rules are revised. Neuromagnetic responses to a sequence of Japanese vowels (a, e, i, o, and u), presented every 0.45 s, were recorded from 14 right-handed Japanese participants. The vowel order was constrained by a Markov stochastic model such that five nonsense words (aue, eao, iea, oiu, and uoi) were chained with an either-or rule: the probability of the forthcoming word was statistically defined (80% for one word; 20% for the other) by the most recent two words. All of the word transition probabilities (80% and 20%) were switched in the middle of the sequence. In the first and second quarters of the sequence, the neuromagnetic responses to words that appeared with higher transitional probability were significantly reduced compared with those to words that appeared with lower transitional probability. After the switch in transition probabilities, this response reduction was replicated in the last quarter of the sequence. Responses to the final vowels of the words were also significantly reduced compared with those to the initial vowels in the last quarter of the sequence. The results suggest that both within-word and between-word statistical learning are reflected in neural responses. The study supports the hypothesis that listeners learn larger structures, such as phrases, first and subsequently extract smaller structures, such as words, from the learned phrases. It also provides the first neurophysiological evidence that the correction of statistical knowledge requires more time than the acquisition of new statistical knowledge.
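
    The sequence design described above is easy to make concrete. The following Python sketch is a minimal illustration, not the authors' stimulus code: the transition table pairing each two-word context with a high- and a low-probability successor is invented for the example, since the abstract does not list the actual pairings. It generates a word stream governed by the 80%/20% rule and inverts the probabilities at the midpoint, as in the study.

        import random

        WORDS = ["aue", "eao", "iea", "oiu", "uoi"]

        def make_table(words, seed=0):
            # Illustrative second-order transition table: each pair of
            # preceding words maps to a (high-probability, low-probability)
            # pair of candidate next words.
            rng = random.Random(seed)
            return {(w1, w2): tuple(rng.sample(words, 2))
                    for w1 in words for w2 in words}

        def generate(n_words, table, p_hi=0.8, switch_at=None, seed=1):
            # Each next word follows the most recent two words with
            # probability p_hi; if switch_at is given, the 80%/20%
            # assignment is inverted from that point on, mimicking the
            # mid-sequence switch described in the abstract.
            rng = random.Random(seed)
            seq = rng.sample(WORDS, 2)  # arbitrary starting context
            while len(seq) < n_words:
                hi, lo = table[(seq[-2], seq[-1])]
                if switch_at is not None and len(seq) >= switch_at:
                    hi, lo = lo, hi
                seq.append(hi if rng.random() < p_hi else lo)
            return seq

        stream = generate(400, make_table(WORDS), switch_at=200)
        print(" ".join(stream[:8]))

    Because each word is a triplet of vowels presented at a fixed 0.45 s rate, word boundaries in such a stream are recoverable only from the transition statistics, which is what makes the paradigm a test of statistical learning.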

  14. Utterance-final position and pitch marking aid word learning in school-age children

    PubMed Central

    Laaha, Sabine; Fitch, W. Tecumseh

    2017-01-01

    We investigated the effects of word order and prosody on word learning in school-age children. Third graders viewed photographs belonging to one of three semantic categories while hearing four-word nonsense utterances containing a target word. In the control condition, all words had the same pitch and, across trials, the position of the target word was varied systematically within each utterance. The only cue to word–meaning mapping was the co-occurrence of target words and referents; this cue was present in all conditions. In the Utterance-final condition, the target word always occurred in utterance-final position and was produced at the same fundamental frequency as the other words of the utterance. In the Pitch peak condition, the position of the target word was varied systematically within each utterance across trials, and the target word was produced with pitch contrasts typical of infant-directed speech (IDS). In the Pitch peak + Utterance-final condition, the target word always occurred in utterance-final position and was marked with a pitch contrast typical of IDS. Word learning occurred in all conditions except the control condition. Moreover, learning performance was significantly higher than that observed with simple co-occurrence (control condition) only in the Pitch peak + Utterance-final condition. We conclude that, for school-age children, the combination of utterance-final alignment and pitch enhancement boosts word learning. PMID:28878961

  15. Grounding statistical learning in context: The effects of learning and retrieval contexts on cross-situational word learning.

    PubMed

    Chen, Chi-Hsin; Yu, Chen

    2017-06-01

    Natural language environments usually provide structured contexts for learning. This study examined the effects of semantically themed contexts, in both the learning and retrieval phases, on statistical word learning. Results from two experiments consistently showed that participants performed better in semantically themed learning contexts. In contrast, themed retrieval contexts did not affect performance. Our work suggests that word learners are sensitive to statistical regularities not just at the level of individual word-object co-occurrences but also at the level of a whole network of associations among objects and their properties.
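
    Cross-situational statistical learning of the kind studied here is often modeled as co-occurrence counting across individually ambiguous trials. The sketch below is a minimal illustration with invented trial data, not the authors' implementation; it tallies word-object co-occurrences and maps each word to its most frequent companion object.

        from collections import defaultdict

        def cross_situational_learn(trials):
            # Each trial pairs the set of heard words with the set of
            # visible objects; no single trial reveals which word names
            # which object, but the counts across trials do.
            counts = defaultdict(lambda: defaultdict(int))
            for words, objects in trials:
                for w in words:
                    for o in objects:
                        counts[w][o] += 1
            # Map each word to the object it co-occurred with most often.
            return {w: max(obs, key=obs.get) for w, obs in counts.items()}

        # Invented trials: each is ambiguous on its own, but "dax" and
        # "ball" (etc.) co-occur reliably across the full set.
        trials = [
            ({"dax", "blick"}, {"ball", "cup"}),
            ({"dax", "wug"}, {"ball", "shoe"}),
            ({"blick", "wug"}, {"cup", "shoe"}),
        ]
        print(cross_situational_learn(trials))
        # {'dax': 'ball', 'blick': 'cup', 'wug': 'shoe'} (key order may vary)

    A semantically themed learning context, in these terms, is one in which the objects within a trial share a category; the study's finding is that such trial-level structure improves the mappings learners extract from the counts.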

  16. The Influence of Concreteness of Concepts on the Integration of Novel Words into the Semantic Network

    PubMed Central

    Ding, Jinfeng; Liu, Wenjuan; Yang, Yufang

    2017-01-01

    On the basis of previous studies revealing a processing advantage of concrete words over abstract words, the current study used the event-related potential (ERP) technique to further explore the influence of concreteness on the integration of novel words into semantic memory. During the learning phase, participants read two-sentence contexts and inferred the meaning of novel words, which were two-character non-words in Chinese. Each novel word's meaning was either a concrete or an abstract known concept that could be inferred from the contexts. During the testing phase, participants performed a lexical decision task in which the learned novel words served as primes for their corresponding concepts, for semantically related targets, or for unrelated targets. For the concrete novel words, the semantically related words belonged to the same semantic categories as their corresponding concepts; for the abstract novel words, the semantically related words were synonyms of their corresponding concepts. The unrelated targets were real words, concrete or abstract for the concrete and abstract novel words, respectively. The ERP results showed that the corresponding concepts and the semantically related words elicited smaller N400s than the unrelated words, and this N400 effect was not modulated by the concreteness of the concepts. In addition, the concrete corresponding concepts elicited a smaller late positive component (LPC) than the concrete unrelated words; this LPC effect was absent for the abstract words. The results indicate that although both concrete and abstract novel words can be acquired and linked to related words in the semantic network after a short learning phase, concrete novel words are learned better. Our findings support the (extended) dual coding theory and broaden our understanding of adult word learning and changes in concept organization. PMID:29255440

  17. Lessons from Television: Children's Word Learning When Viewing.

    ERIC Educational Resources Information Center

    Rice, Mabel L.; Woodsmall, Linda

    1988-01-01

    Preschoolers were assigned to experimental and control groups to investigate whether they could learn novel words when viewing television and whether the learning was influenced by age or type of word. (PCB)

  18. Using variability to guide dimensional weighting: Associative mechanisms in early word learning

    PubMed Central

    Apfelbaum, Keith S.; McMurray, Bob

    2013-01-01

    At 14 months, children appear to struggle to apply their fairly well-developed speech perception abilities to learning similar-sounding words (e.g. bih/dih; Stager & Werker, 1997). However, variability in non-phonetic aspects of the training stimuli seems to aid word learning at this age. Extant theories of early word learning cannot account for this benefit of variability. We offer a simple explanation for this range of effects based on associative learning. Simulations suggest that if infants encode both non-contrastive information (e.g. cues to speaker voice) and meaningful linguistic cues (e.g. place of articulation or voicing), then associative learning mechanisms predict these variability effects in early word learning. Crucially, despite the importance of task variables in predicting performance, this body of work shows that phonological categories are still developing at this age and that the structure of non-informative cues critically influences word learning abilities. PMID:21609356
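
    The associative account sketched in this abstract can be illustrated with a single-layer, error-driven (delta-rule) learner. The code below is a toy illustration under invented encodings, not the authors' simulations: one input dimension carries the contrastive phonetic cue and one carries a non-contrastive speaker-voice cue, and the question is how training-set variability in the voice cue changes the learned weights.

        import random

        def train_delta(trials, n_features, lr=0.1, epochs=200):
            # Delta-rule (Rescorla-Wagner-style) learner mapping cue
            # vectors to a word label (+1 vs. -1).
            w = [0.0] * n_features
            for _ in range(epochs):
                for x, y in trials:
                    err = y - sum(wi * xi for wi, xi in zip(w, x))
                    for i in range(n_features):
                        w[i] += lr * err * x[i]
            return w

        def make_trials(variable_voice, n=40, seed=1):
            # Feature 0: contrastive phonetic cue, perfectly tied to the
            # label. Feature 1: speaker-voice cue, either varying randomly
            # across tokens or confounded with the label.
            rng = random.Random(seed)
            trials = []
            for k in range(n):
                y = 1 if k % 2 == 0 else -1
                voice = rng.choice([1, -1]) if variable_voice else y
                trials.append(([y, voice], y))
            return trials

        for variable in (False, True):
            w = train_delta(make_trials(variable), n_features=2)
            print(f"variable voice={variable}: weights={[round(v, 2) for v in w]}")

    With a single consistent voice, the learner splits credit between the phonetic and voice dimensions; with variable voices, the weight concentrates on the phonetic dimension. That redistribution is the variability benefit the abstract attributes to associative mechanisms.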
