Sample records for visual lexical categorization

  1. Lexical Familiarity and Processing Efficiency: Individual Differences in Naming, Lexical Decision, and Semantic Categorization

    PubMed Central

    Lewellen, Mary Jo; Goldinger, Stephen D.; Pisoni, David B.; Greene, Beth G.

    2012-01-01

    College students were separated into 2 groups (high and low) on the basis of 3 measures: subjective familiarity ratings of words, self-reported language experiences, and a test of vocabulary knowledge. Three experiments were conducted to determine if the groups also differed in visual word naming, lexical decision, and semantic categorization. High Ss were consistently faster than low Ss in naming visually presented words. They were also faster and more accurate in making difficult lexical decisions and in rejecting homophone foils in semantic categorization. Taken together, the results demonstrate that Ss who differ in lexical familiarity also differ in processing efficiency. The relationship between processing efficiency and working memory accounts of individual differences in language processing is also discussed. PMID:8371087

  2. Is the Lateralized Categorical Perception of Color a Situational Effect of Language on Color Perception?

    PubMed

    Zhong, Weifang; Li, You; Huang, Yulan; Li, He; Mo, Lei

    2018-01-01

    This study investigated whether and how different lexical categories, corresponding to different discriminatory characteristics of the same colors, affect a person's perception of those colors. In three experiments, Chinese participants were primed to categorize four graduated colors (dark green, light green, light blue, and dark blue) into green and blue; into light and dark colors; or into dark green, light green, light blue, and dark blue. The participants were then required to complete a visual search task. Reaction times in the visual search task indicated that different lateralized categorical perceptions (CPs) of color corresponded to the various priming situations. These results suggest that all of the lexical categories corresponding to different discriminatory characteristics of the same colors can influence people's perceptions of colors and that color perceptions can be influenced differently by distinct types of lexical categories depending on the context.

  3. Unfolding Visual Lexical Decision in Time

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni

    2012-01-01

    Visual lexical decision is a classical paradigm in psycholinguistics, and numerous studies have assessed the so-called "lexicality effect" (i.e., better performance with lexical than non-lexical stimuli). Far less is known about the dynamics of choice, because many studies measured overall reaction times, which are not informative about underlying processes. To unfold visual lexical decision in (over) time, we measured participants' hand movements toward one of two item alternatives by recording the streaming x,y coordinates of the computer mouse. Participants categorized four kinds of stimuli as "lexical" or "non-lexical": high and low frequency words, pseudowords, and letter strings. Spatial attraction toward the opposite category was present for low frequency words and pseudowords. Increasing the ambiguity of the stimuli led to greater movement complexity and trajectory attraction to competitors, whereas no such effect was present for high frequency words and letter strings. Results fit well with dynamic models of perceptual decision-making, which describe the process as a competition between alternatives guided by the continuous accumulation of evidence. More broadly, our results point to a key role of statistical decision theory in studying linguistic processing in terms of dynamic and non-modular mechanisms. PMID:22563419

  4. Phoneme categorization and discrimination in younger and older adults: a comparative analysis of perceptual, lexical, and attentional factors.

    PubMed

    Mattys, Sven L; Scharenborg, Odette

    2014-03-01

    This study investigates the extent to which age-related language processing difficulties are due to a decline in sensory processes or to a deterioration of cognitive factors, specifically, attentional control. Two facets of attentional control were examined: inhibition of irrelevant information and divided attention. Younger and older adults were asked to categorize the initial phoneme of spoken syllables ("Was it m or n?"), trying to ignore the lexical status of the syllables. The phonemes were manipulated to range in eight steps from m to n. Participants also did a discrimination task on syllable pairs ("Were the initial sounds the same or different?"). Categorization and discrimination were performed under either divided attention (concurrent visual-search task) or focused attention (no visual task). The results showed that even when the younger and older adults were matched on their discrimination scores: (1) the older adults had more difficulty inhibiting lexical knowledge than did younger adults, (2) divided attention weakened lexical inhibition in both younger and older adults, and (3) divided attention impaired sound discrimination more in older than younger listeners. The results confirm the independent and combined contribution of sensory decline and deficit in attentional control to language processing difficulties associated with aging. The relative weight of these variables and their mechanisms of action are discussed in the context of theories of aging and language.

  5. Phi-square Lexical Competition Database (Phi-Lex): an online tool for quantifying auditory and visual lexical competition.

    PubMed

    Strand, Julia F

    2014-03-01

    A widely agreed-upon feature of spoken word recognition is that multiple lexical candidates in memory are simultaneously activated in parallel when a listener hears a word, and that those candidates compete for recognition (Luce, Goldinger, Auer, & Vitevitch, Perception & Psychophysics 62:615-625, 2000; Luce & Pisoni, Ear and Hearing 19:1-36, 1998; McClelland & Elman, Cognitive Psychology 18:1-86, 1986). Because the presence of those competitors influences word recognition, much research has sought to quantify the processes of lexical competition. Metrics that quantify lexical competition continuously are more effective predictors of auditory and visual (lipread) spoken word recognition than are the categorical metrics traditionally used (Feld & Sommers, Speech Communication 53:220-228, 2011; Strand & Sommers, Journal of the Acoustical Society of America 130:1663-1672, 2011). A limitation of the continuous metrics is that they are somewhat computationally cumbersome and require access to existing speech databases. This article describes the Phi-square Lexical Competition Database (Phi-Lex): an online, searchable database that provides access to multiple metrics of auditory and visual (lipread) lexical competition for English words, available at www.juliastrand.com/phi-lex.

  6. ERP correlates of letter identity and letter position are modulated by lexical frequency

    PubMed Central

    Vergara-Martínez, Marta; Perea, Manuel; Gómez, Pablo; Swaab, Tamara Y.

    2013-01-01

    The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: A transposed-letter (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition. PMID:23454070

  7. The influence of print exposure on the body-object interaction effect in visual word recognition.

    PubMed

    Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M

    2012-01-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  8. Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG.

    PubMed

    Costanzo, Michelle E; McArdle, Joseph J; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R

    2013-01-01

    The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization: the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization, specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items.

  9. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316

  10. Visual Word Recognition in Deaf Readers: Lexicality Is Modulated by Communication Mode

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects. PMID:23554976

  11. Visual word recognition in deaf readers: lexicality is modulated by communication mode.

    PubMed

    Barca, Laura; Pezzulo, Giovanni; Castrataro, Marianna; Rinaldi, Pasquale; Caselli, Maria Cristina

    2013-01-01

    Evidence indicates that adequate phonological abilities are necessary to develop proficient reading skills and that later in life phonology also has a role in the covert visual word recognition of expert readers. Impairments of acoustic perception, such as deafness, can lead to atypical phonological representations of written words and letters, which in turn can affect reading proficiency. Here, we report an experiment in which young adults with different levels of acoustic perception (i.e., hearing and deaf individuals) and different modes of communication (i.e., hearing individuals using spoken language, deaf individuals with a preference for sign language, and deaf individuals using the oral modality with less or no competence in sign language) performed a visual lexical decision task, which consisted of categorizing real words and consonant strings. The lexicality effect was restricted to deaf signers who responded faster to real words than consonant strings, showing over-reliance on whole word lexical processing of stimuli. No effect of stimulus type was found in deaf individuals using the oral modality or in hearing individuals. Thus, mode of communication modulates the lexicality effect. This suggests that learning a sign language during development shapes visuo-motor representations of words, which are tuned to the actions used to express them (phono-articulatory movements vs. hand movements) and to associated perceptions. As these visuo-motor representations are elicited during on-line linguistic processing and can overlap with the perceptual-motor processes required to execute the task, they can potentially produce interference or facilitation effects.

  12. Spatial and temporal features of superordinate semantic processing studied with fMRI and EEG

    PubMed Central

    Costanzo, Michelle E.; McArdle, Joseph J.; Swett, Bruce; Nechaev, Vladimir; Kemeny, Stefan; Xu, Jiang; Braun, Allen R.

    2013-01-01

    The relationships between the anatomical representation of semantic knowledge in the human brain and the timing of neurophysiological mechanisms involved in manipulating such information remain unclear. This is the case for superordinate semantic categorization—the extraction of general features shared by broad classes of exemplars (e.g., living vs. non-living semantic categories). We proposed that, because of the abstract nature of this information, input from diverse input modalities (visual or auditory, lexical or non-lexical) should converge and be processed in the same regions of the brain, at similar time scales during superordinate categorization—specifically in a network of heteromodal regions, and late in the course of the categorization process. In order to test this hypothesis, we utilized electroencephalography and event related potentials (EEG/ERP) with functional magnetic resonance imaging (fMRI) to characterize subjects' responses as they made superordinate categorical decisions (living vs. non-living) about objects presented as visual pictures or auditory words. Our results reveal that, consistent with our hypothesis, during the course of superordinate categorization, information provided by these diverse inputs appears to converge in both time and space: fMRI showed that heteromodal areas of the parietal and temporal cortices are active during categorization of both classes of stimuli. The ERP results suggest that superordinate categorization is reflected as a late positive component (LPC) with a parietal distribution and long latencies for both stimulus types. Within the areas and times in which modality independent responses were identified, some differences between living and non-living categories were observed, with a more widespread spatial extent and longer latency responses for categorization of non-living items. PMID:23847490

  13. Tracking Second Thoughts: Continuous and Discrete Revision Processes during Visual Lexical Decision

    PubMed Central

    Barca, Laura; Pezzulo, Giovanni

    2015-01-01

    We studied the dynamics of lexical decisions by asking participants to categorize lexical and nonlexical stimuli and recording their mouse movements toward response buttons during the choice. In a previous report we revealed greater trajectory curvature and attraction to competitors for Low Frequency words and Pseudowords. This analysis did not clarify whether the trajectory curvature in the two conditions was due to a continuous dynamic competition between the response alternatives or if a discrete revision process (a "change of mind") took place during the choice from an initially selected response to the opposite one. To disentangle these two possibilities, here we analyse the velocity and acceleration profiles of mouse movements during the choice. Pseudowords' peak movement velocity occurred with a 100 ms delay with respect to words and Letter Strings. The acceleration profile for High and Low Frequency words and Letter Strings exhibited a butterfly plot with one acceleration peak at 400 ms and one deceleration peak at 650 ms. In contrast, Pseudowords' acceleration profile had two positive peaks (at 400 and 600 ms) followed by movement deceleration, in correspondence with changes in the decision from the lexical to the nonlexical response button. These results speak to different online processes during the categorization of Low Frequency words and Pseudowords, with a continuous competition process for the former and a discrete revision process for the latter. PMID:25699992

  14. Bedding down new words: Sleep promotes the emergence of lexical competition in visual word recognition.

    PubMed

    Wang, Hua-Chen; Savage, Greg; Gaskell, M Gareth; Paulin, Tamara; Robidoux, Serje; Castles, Anne

    2017-08-01

    Lexical competition processes are widely viewed as the hallmark of visual word recognition, but little is known about the factors that promote their emergence. This study examined for the first time whether sleep may play a role in inducing these effects. A group of 27 participants learned novel written words, such as banara, at 8 am and were tested on their learning at 8 pm the same day (AM group), while 29 participants learned the words at 8 pm and were tested at 8 am the following day (PM group). Both groups were retested after 24 hours. Using a semantic categorization task, we showed that lexical competition effects, as indexed by slowed responses to existing neighbor words such as banana, emerged 12 h later in the PM group who had slept after learning but not in the AM group. After 24 h the competition effects were evident in both groups. These findings have important implications for theories of orthographic learning and broader neurobiological models of memory consolidation.

  15. Evidence for Separate Contributions of High and Low Spatial Frequencies during Visual Word Recognition.

    PubMed

    Winsler, Kurt; Holcomb, Phillip J; Midgley, Katherine J; Grainger, Jonathan

    2017-01-01

    Previous studies have shown that different spatial frequency information processing streams interact during the recognition of visual stimuli. However, the respective contributions of high and low spatial frequency (HSF and LSF) information to visual word recognition remain a matter of debate. This study examined the role of different spatial frequencies in visual word recognition using event-related potential (ERP) masked priming. EEG was recorded from 32 scalp sites in 30 English-speaking adults in a go/no-go semantic categorization task. Stimuli were white characters on a neutral gray background. Targets were uppercase five-letter words preceded by a forward-mask (#######) and a 50 ms lowercase prime. Primes were either the same word (repeated) or a different word (un-repeated) than the subsequent target and either contained only high, only low, or full spatial frequency information. Additionally, within each condition, half of the prime-target pairs were high lexical frequency, and half were low. In the full spatial frequency condition, typical ERP masked priming effects were found with an attenuated N250 (sub-lexical) and N400 (lexical-semantic) for repeated compared to un-repeated primes. For HSF primes there was a weaker N250 effect which interacted with lexical frequency, a significant reversal of the effect around 300 ms, and an N400-like effect for only high lexical frequency word pairs. LSF primes did not produce any of the classic ERP repetition priming effects; however, they did elicit a distinct early effect around 200 ms in the opposite direction of typical repetition effects. HSF information accounted for many of the masked repetition priming ERP effects, which suggests that HSFs are more crucial for word recognition. However, LSFs did produce their own pattern of priming effects, indicating that larger scale information may still play a role in word recognition.

  16. From sound to syntax: phonological constraints on children's lexical categorization of new words.

    PubMed

    Fitneva, Stanka A; Christiansen, Morten H; Monaghan, Padraic

    2009-11-01

    Two studies examined the role of phonological cues in the lexical categorization of new words when children could also rely on learning by exclusion, and whether the role of phonology depends on extensive experience with a language. Phonological cues were assessed via phonological typicality, an aggregate measure of the relationship between the phonology of a word and the phonology of words in the same lexical class. Experiment 1 showed that when monolingual English-speaking seven-year-olds could rely on learning by exclusion, phonological typicality only affected their initial inferences about the words. Consistent with recent computational analyses, phonological cues had a stronger impact on the processing of verb-like than noun-like items. Experiment 2 revealed an impact of French on the performance of seven-year-olds in French immersion when tested in a French language environment. Thus, phonological knowledge may affect lexical categorization even in the absence of extensive experience.

  17. Aphasic Patients Exhibit a Reversal of Hemispheric Asymmetries in Categorical Color Discrimination

    PubMed Central

    Paluy, Yulia; Gilbert, Aubrey L.; Baldo, Juliana V.; Dronkers, Nina F.; Ivry, Richard B.

    2010-01-01

    Patients with left hemisphere (LH) or right hemisphere (RH) brain injury due to stroke were tested on a speeded, color discrimination task in which two factors were manipulated: 1) the categorical relationship between the target and the distracters and 2) the visual field in which the target was presented. Similar to controls, the RH patients were faster in detecting targets in the right visual field when the target and distracters had different color names compared to when their names were the same. This effect was absent in the LH patients, consistent with the hypothesis that injury to the left hemisphere handicaps the automatic activation of lexical codes. Moreover, the LH patients showed a reversed effect, such that the advantage of different target-distracter names was now evident for targets in the left visual field. This reversal may suggest a reorganization of the color lexicon in the right hemisphere following left hemisphere brain injury and/or the unmasking of a heightened right hemisphere sensitivity to color categories. PMID:21216454

  18. Impact of Visual, Vocal, and Lexical Cues on Judgments of Counselor Qualities

    ERIC Educational Resources Information Center

    Strahan, Carole; Zytowski, Donald G.

    1976-01-01

    Undergraduate students (N=130) rated Carl Rogers via visual, lexical, vocal, or vocal-lexical communication channels. Lexical cues were more important in creating favorable impressions among females. Subsequent exposure to combined visual-vocal-lexical cues resulted in warmer and less distant ratings, but not on a consistent basis. (Author)

  19. When Wine and Apple Both Help the Production of Grapes: ERP Evidence for Post-lexical Semantic Facilitation in Picture Naming

    PubMed Central

    Python, Grégoire; Fargier, Raphaël; Laganaro, Marina

    2018-01-01

    Background: Producing a word in referential naming requires selecting the right word in our mental lexicon among co-activated semantically related words. The mechanisms underlying semantic context effects during speech planning are still controversial, particularly for semantic facilitation, the investigation of which remains under-represented in contrast to the plethora of studies dealing with interference. Our aim is to study the time-course of semantic facilitation in picture naming, using a picture-word “interference” paradigm and event-related potentials (ERPs). Methods: We compared two different types of semantic relationships, associative and categorical, in a single-word priming and a double-word priming paradigm. The primes were presented visually with a long negative Stimulus Onset Asynchrony (SOA), which is expected to cause facilitation. Results: Shorter naming latencies were observed after both associative and categorical primes, as compared to unrelated primes, and even shorter latencies after two primes. Electrophysiological results showed relatively late modulations of waveform amplitudes for both types of primes (beginning ~330 ms post picture onset with a single prime and ~275 ms post picture onset with two primes), corresponding to a shift in latency of similar topographic maps across conditions. Conclusion: The present results are in favor of a post-lexical locus of semantic facilitation for associative and categorical priming in picture naming and confirm that semantic facilitation is as relevant as semantic interference for informing accounts of word production. The post-lexical locus argued for here might be related to self-monitoring and/or to modulations at the level of word-form planning, without excluding the participation of strategic processes. PMID:29692716

  20. An RT distribution analysis of relatedness proportion effects in lexical decision and semantic categorization reveals different mechanisms.

    PubMed

    de Wit, Bianca; Kinoshita, Sachiko

    2015-01-01

    The magnitude of the semantic priming effect is known to increase as the proportion of related prime-target pairs in an experiment increases. This relatedness proportion (RP) effect was studied in a lexical decision task at a short prime-target stimulus onset asynchrony (240 ms), which is widely assumed to preclude strategic prospective usage of the prime. The analysis of the reaction time (RT) distribution suggested that the observed RP effect reflected a modulation of a retrospective semantic matching process. The pattern of the RP effect on the RT distribution found here is contrasted to that reported in De Wit and Kinoshita's (2014) semantic categorization study, and it is concluded that the RP effect is driven by different underlying mechanisms in lexical decision and semantic categorization.

  21. Direct Mapping of Acoustics to Phonology: On the Lexical Encoding of Front Rounded Vowels in L1 English-L2 French Acquisition

    ERIC Educational Resources Information Center

    Darcy, Isabelle; Dekydtspotter, Laurent; Sprouse, Rex A.; Glover, Justin; Kaden, Christiane; McGuire, Michael; Scott, John H. G.

    2012-01-01

    It is well known that adult US-English-speaking learners of French experience difficulties acquiring high /y/-/u/ and mid /œ/-/ɔ/ front vs. back rounded vowel contrasts in French. This study examines the acquisition of these French vowel contrasts at two levels: phonetic categorization and lexical representations. An ABX categorization task…

  22. When bees hamper the production of honey: lexical interference from associates in speech production.

    PubMed

    Abdel Rahman, Rasha; Melinger, Alissa

    2007-05-01

    In this article, the authors explore semantic context effects in speaking. In particular, the authors investigate a marked discrepancy between categorically and associatively induced effects; only categorical relationships have been reported to cause interference in object naming. In Experiments 1 and 2, a variant of the semantic blocking paradigm was used to induce two different types of semantic context effects. Pictures were either named in the context of categorically related objects (e.g., animals: bee, cow, fish) or in the context of associatively related objects from different semantic categories (e.g., apiary: bee, honey, bee keeper). Semantic interference effects were observed in both conditions, relative to an unrelated context. Experiment 3 replicated the classic effects of categorical interference and associative facilitation in a picture-word interference paradigm with the material used in Experiment 2. These findings suggest that associates are active lexical competitors and that the microstructure of lexicalization is highly flexible and adjustable to the semantic context in which the utterance takes place.

  23. Talker and lexical effects on audiovisual word recognition by adults with cochlear implants.

    PubMed

    Kaiser, Adam R; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B

    2003-04-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, R(a), was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech.

  24. Talker and Lexical Effects on Audiovisual Word Recognition by Adults With Cochlear Implants

    PubMed Central

    Kaiser, Adam R.; Kirk, Karen Iler; Lachs, Lorin; Pisoni, David B.

    2012-01-01

    The present study examined how postlingually deafened adults with cochlear implants combine visual information from lipreading with auditory cues in an open-set word recognition task. Adults with normal hearing served as a comparison group. Word recognition performance was assessed using lexically controlled word lists presented under auditory-only, visual-only, and combined audiovisual presentation formats. Effects of talker variability were studied by manipulating the number of talkers producing the stimulus tokens. Lexical competition was investigated using sets of lexically easy and lexically hard test words. To assess the degree of audiovisual integration, a measure of visual enhancement, Ra, was used to assess the gain in performance provided in the audiovisual presentation format relative to the maximum possible performance obtainable in the auditory-only format. Results showed that word recognition performance was highest for audiovisual presentation followed by auditory-only and then visual-only stimulus presentation. Performance was better for single-talker lists than for multiple-talker lists, particularly under the audiovisual presentation format. Word recognition performance was better for the lexically easy than for the lexically hard words regardless of presentation format. Visual enhancement scores were higher for single-talker conditions compared to multiple-talker conditions and tended to be somewhat better for lexically easy words than for lexically hard words. The pattern of results suggests that information from the auditory and visual modalities is used to access common, multimodal lexical representations in memory. The findings are discussed in terms of the complementary nature of auditory and visual sources of information that specify the same underlying gestures and articulatory events in speech. PMID:14700380

  25. The role of visual representations during the lexical access of spoken words

    PubMed Central

    Lewis, Gwyneth; Poeppel, David

    2015-01-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability - concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation. PMID:24814579

  26. The role of visual representations during the lexical access of spoken words.

    PubMed

    Lewis, Gwyneth; Poeppel, David

    2014-07-01

    Do visual representations contribute to spoken word recognition? We examine, using MEG, the effects of sublexical and lexical variables at superior temporal (ST) areas and the posterior middle temporal gyrus (pMTG) compared with that of word imageability at visual cortices. Embodied accounts predict early modulation of visual areas by imageability--concurrently with or prior to modulation of pMTG by lexical variables. Participants responded to speech stimuli varying continuously in imageability during lexical decision with simultaneous MEG recording. We employed the linguistic variables in a new type of correlational time course analysis to assess trial-by-trial activation in occipital, ST, and pMTG regions of interest (ROIs). The linguistic variables modulated the ROIs during different time windows. Critically, visual regions reflected an imageability effect prior to effects of lexicality on pMTG. This surprising effect supports a view on which sensory aspects of a lexical item are not a consequence of lexical activation.

  27. Speaking Rate Affects the Perception of Duration as a Suprasegmental Lexical-Stress Cue

    ERIC Educational Resources Information Center

    Reinisch, Eva; Jesse, Alexandra; McQueen, James M.

    2011-01-01

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress…

  28. Exploration of SWRL Rule Bases through Visualization, Paraphrasing, and Categorization of Rules

    NASA Astrophysics Data System (ADS)

    Hassanpour, Saeed; O'Connor, Martin J.; Das, Amar K.

    Rule bases are increasingly being used as repositories of knowledge content on the Semantic Web. As the size and complexity of these rule bases increases, developers and end users need methods of rule abstraction to facilitate rule management. In this paper, we describe a rule abstraction method for Semantic Web Rule Language (SWRL) rules that is based on lexical analysis and a set of heuristics. Our method results in a tree data structure that we exploit in creating techniques to visualize, paraphrase, and categorize SWRL rules. We evaluate our approach by applying it to several biomedical ontologies that contain SWRL rules, and show how the results reveal rule patterns within the rule base. We have implemented our method as a plug-in tool for Protégé-OWL, the most widely used ontology modeling software for the Semantic Web. Our tool can allow users to rapidly explore content and patterns in SWRL rule bases, enabling their acquisition and management.

  29. The effects of bilateral presentations on lateralized lexical decision.

    PubMed

    Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran

    2007-06-01

    We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the stage of the process in which the distractor is affecting the decision about the target; and third, to determine whether the interaction between the lexicality of the target and the lexicality of the distractor ("lexical redundancy effect") is due to facilitation or inhibition of lexical processing. Unilateral and bilateral trials were presented in separate blocks. Target stimuli were always underlined. Regarding our first goal, we found that bilateral presentations (a) increased the effect of visual hemifield of presentation (right visual field advantage) for words by slowing down the processing of word targets presented to the left visual field, and (b) produced an interaction between visual hemifield of presentation (VF) and target lexicality (TLex), which implies the use of different strategies by the two hemispheres in lexical processing. For our second goal of determining the processing stage that is affected by the distractor, we introduced a third condition in which targets were always accompanied by "perceptual" distractors consisting of sequences of the letter "x" (e.g., xxxx). Performance on these trials indicated that most of the interaction occurs during lexical access (after basic perceptual analysis but before response programming). Finally, a comparison between performance patterns on the trials containing perceptual and lexical distractors indicated that the lexical redundancy effect is mainly due to inhibition of word processing by pseudoword distractors.

  30. Before the N400: effects of lexical-semantic violations in visual cortex.

    PubMed

    Dikker, Suzanne; Pylkkänen, Liina

    2011-07-01

    There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show increased amplitudes in the visual M100 component, the first salient MEG response to visual stimulation. This research asks whether violations of predictions based on lexical-semantic information might similarly generate early visual effects. In a picture-noun matching task, we found early visual effects for words that did not accurately describe the preceding pictures. These results demonstrate that, just like syntactic predictions, lexical-semantic predictions can affect early visual processing around ∼100 ms, suggesting that the M100 response is not exclusively tuned to recognizing visual features relevant to syntactic category analysis. Rather, the brain might generate predictions about upcoming visual input whenever it can. However, visual effects of lexical-semantic violations only occurred when a single lexical item could be predicted. We argue that this may be due to the fact that in natural language processing, there is typically no straightforward mapping between lexical-semantic fields (e.g., flowers) and visual or auditory forms (e.g., tulip, rose, magnolia). For syntactic categories, in contrast, certain form features do reliably correlate with category membership. This difference may, in part, explain why certain syntactic effects typically occur much earlier than lexical-semantic effects.

  31. Influence of prime-target relationship on semantic priming effects from words in a lexical-decision task.

    PubMed

    Abad, María J F; Noguera, Carmen; Ortells, Juan J

    2003-07-01

    The present research examines the influence of prime-target relationship (associative and categorical versus categorical only) on priming effects from attended and ignored parafoveal words. Participants performed a lexical-decision task on a single central target, which was preceded by two parafoveal prime words, one of which (the attended prime) was spatially precued. The results showed reliable positive and negative priming effects from attended and ignored words, respectively. However, this priming pattern was observed only for the "associative and categorical", but not for the "categorical only" relationship condition. These results suggest that the lack of semantic priming effects from words in some prior studies may be attributed to the kind of material used (i.e. weakly-associated word pairs).

  32. What You See Isn’t Always What You Get: Auditory Word Signals Trump Consciously Perceived Words in Lexical Access

    PubMed Central

    Ostrand, Rachel; Blumstein, Sheila E.; Ferreira, Victor S.; Morgan, James L.

    2016-01-01

    Human speech perception often includes both an auditory and visual component. A conflict in these signals can result in the McGurk illusion, in which the listener perceives a fusion of the two streams, implying that information from both has been integrated. We report two experiments investigating whether auditory-visual integration of speech occurs before or after lexical access, and whether the visual signal influences lexical access at all. Subjects were presented with McGurk or Congruent primes and performed a lexical decision task on related or unrelated targets. Although subjects perceived the McGurk illusion, McGurk and Congruent primes with matching real-word auditory signals equivalently primed targets that were semantically related to the auditory signal, but not targets related to the McGurk percept. We conclude that the time course of auditory-visual integration is dependent on the lexicality of the auditory and visual input signals, and that listeners can lexically access one word and yet consciously perceive another. PMID:27011021

  33. Newborn infants' sensitivity to perceptual cues to lexical and grammatical words.

    PubMed

    Shi, R; Werker, J F; Morgan, J L

    1999-09-30

    In our study newborn infants were presented with lists of lexical and grammatical words prepared from natural maternal speech. The results show that newborns are able to categorically discriminate these sets of words based on a constellation of perceptual cues that distinguish them. This general ability to detect and categorically discriminate sets of words on the basis of multiple acoustic and phonological cues may provide a perceptual base that can help older infants bootstrap into the acquisition of grammatical categories and syntactic structure.

  34. The effect of orthographic and emotional neighbourhood in a colour categorization task.

    PubMed

    Camblats, Anna-Malika; Mathey, Stéphanie

    2016-02-01

    This study investigated whether and how the strength of reading interference in a colour categorization task can be influenced by lexical competition and the emotional characteristics of words not directly presented. Previous findings showed inhibitory effects of high-frequency orthographic and emotional neighbourhood in the lexical decision task. Here, we examined the effect of orthographic neighbour frequency according to the emotional valence of the higher-frequency neighbour in an emotional orthographic Stroop paradigm. Stimuli were coloured neutral words that had either (1) no orthographic neighbour (e.g. PISTIL [pistil]), (2) one neutral higher-frequency neighbour (e.g. tirade [tirade]/TIRAGE [draw]) or (3) one negative higher-frequency neighbour (e.g. idiome [idiom]/IDIOTE [idiotic]). The results showed that colour categorization times were longer for words with no orthographic neighbour than for words with one neutral neighbour of higher frequency and even longer when the higher-frequency neighbour was neutral rather than negative. Thus, it appears not only that the orthographic neighbourhood of the coloured stimulus words intervenes in a colour categorization task, but also that the emotional content of the neighbour contributes to response times. These findings are discussed in terms of lexical competition between the stimulus word and non-presented orthographic neighbours, which in turn would modify the strength of reading interference on colour categorization times.

  35. The Use of Segmental and Suprasegmental Information in Lexical Access: A First- and Second-Language Chinese Investigation

    ERIC Educational Resources Information Center

    Connell, Katrina

    2017-01-01

    The present study investigated first language (L1) and second language (L2) Chinese categorization of tones and segments and use of tones and segments in lexical access. Previous research has shown that English listeners rely more on pitch height than pitch direction when perceiving lexical tones; however, it remains unclear if this superior use…

  36. Associative and repetition priming with the repeated masked prime technique: no priming found.

    PubMed

    Avons, S E; Russo, Riccardo; Cinel, Caterina; Verolini, Veronica; Glynn, Kevin; McDonald, Rebecca; Cameron, Marie

    2009-01-01

    Wentura and Frings (2005) reported evidence of subliminal categorical priming on a lexical decision task, using a new method of visual masking in which the prime string consisted of the prime word flanked by random consonants, and random letter masks alternated with the prime string on successive refresh cycles. We investigated associative and repetition priming on lexical decision, using the same method of visual masking. Three experiments failed to show any evidence of associative priming, either (1) when the prime string was fixed at 10 characters (three to six flanking letters) or (2) when the flanking letters were reduced in number or absent. In all cases, prime detection was at chance level. Strong associative priming was observed with visible unmasked primes, but the addition of flanking letters restricted priming even though prime detection was still high. With repetition priming, no priming effects were found with the repeated masked technique, and prime detection was poor but just above chance levels. We conclude that with repeated masked primes, there is effective visual masking but that associative priming and repetition priming do not occur with experiment-unique prime-target pairs. Explanations for this apparent discrepancy across priming paradigms are discussed. The priming stimuli and prime-target pairs used in this study may be downloaded as supplemental materials from mc.psychonomic-journals.org/content/supplemental.

  37. A Dual-Route Perspective on Brain Activation in Response to Visual Words: Evidence for a Length by Lexicality Interaction in the Visual Word Form Area (VWFA)

    PubMed Central

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-01-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes. PMID:19896538

  38. A dual-route perspective on brain activation in response to visual words: evidence for a length by lexicality interaction in the visual word form area (VWFA).

    PubMed

    Schurz, Matthias; Sturm, Denise; Richlan, Fabio; Kronbichler, Martin; Ladurner, Gunther; Wimmer, Heinz

    2010-02-01

    Based on our previous work, we expected the Visual Word Form Area (VWFA) in the left ventral visual pathway to be engaged by both whole-word recognition and by serial sublexical coding of letter strings. To examine this double function, a phonological lexical decision task (i.e., "Does xxx sound like an existing word?") presented short and long letter strings of words, pseudohomophones, and pseudowords (e.g., Taxi, Taksi and Tazi). Main findings were that the length effect for words was limited to occipital regions and absent in the VWFA. In contrast, a marked length effect for pseudowords was found throughout the ventral visual pathway including the VWFA, as well as in regions presumably engaged by visual attention and silent-articulatory processes. The length by lexicality interaction on brain activation corresponds to well-established behavioral findings of a length by lexicality interaction on naming latencies and speaks for the engagement of the VWFA by both lexical and sublexical processes.

  39. Match graph generation for symbolic indirect correlation

    NASA Astrophysics Data System (ADS)

    Lopresti, Daniel; Nagy, George; Joshi, Ashutosh

    2006-01-01

    Symbolic indirect correlation (SIC) is a new approach for bringing lexical context into the recognition of unsegmented signals that represent words or phrases in printed or spoken form. One way of viewing the SIC problem is to find the correspondence, if one exists, between two bipartite graphs, one representing the matching of the two lexical strings and the other representing the matching of the two signal strings. While perfect matching cannot be expected with real-world signals and while some degree of mismatch is allowed for in the second stage of SIC, such errors, if they are too numerous, can present a serious impediment to a successful implementation of the concept. In this paper, we describe a framework for evaluating the effectiveness of SIC match graph generation and examine the relatively simple, controlled cases of synthetic images of text strings typeset, both normally and in highly condensed fashion. We quantify and categorize the errors that arise, as well as present a variety of techniques we have developed to visualize the intermediate results of the SIC process.

  40. Visual Word Recognition by Bilinguals in a Sentence Context: Evidence for Nonselective Lexical Access

    ERIC Educational Resources Information Center

    Duyck, Wouter; Van Assche, Eva; Drieghe, Denis; Hartsuiker, Robert J.

    2007-01-01

    Recent research on bilingualism has shown that lexical access in visual word recognition by bilinguals is not selective with respect to language. In the present study, the authors investigated language-independent lexical access in bilinguals reading sentences, which constitutes a strong unilingual linguistic context. In the first experiment,…

  41. Reduction of Left Visual Field Lexical Decision Accuracy as a Result of Concurrent Nonverbal Auditory Stimulation

    ERIC Educational Resources Information Center

    Van Strien, Jan W.

    2004-01-01

    To investigate whether concurrent nonverbal sound sequences would affect visual-hemifield lexical processing, lexical-decision performance of 24 strongly right-handed students (12 men, 12 women) was measured in three conditions: baseline, concurrent neutral sound sequence, and concurrent emotional sound sequence. With the neutral sequence,…

  42. Speech Perception Deficits in Mandarin-Speaking School-Aged Children with Poor Reading Comprehension

    PubMed Central

    Liu, Huei-Mei; Tsao, Feng-Ming

    2017-01-01

    Previous studies have shown that children learning alphabetic writing systems who have language impairment or dyslexia exhibit speech perception deficits. However, whether such deficits exist in children learning logographic writing systems who have poor reading comprehension remains uncertain. To further explore this issue, the present study examined speech perception deficits in Mandarin-speaking children with poor reading comprehension. Two self-designed tasks, a consonant categorical perception task and a lexical tone discrimination task, were used to compare speech perception performance in children (n = 31, age range = 7;4–10;2) with poor reading comprehension and an age-matched typically developing group (n = 31, age range = 7;7–9;10). Results showed that the children with poor reading comprehension were less accurate in consonant and lexical tone discrimination tasks and perceived speech contrasts less categorically than the matched group. The correlations between speech perception skills (i.e., consonant and lexical tone discrimination sensitivities and slope of consonant identification curve) and individuals’ oral language and reading comprehension were stronger than the correlations between speech perception ability and word recognition ability. In conclusion, the results revealed that Mandarin-speaking children with poor reading comprehension exhibit less categorical speech perception, suggesting that imprecise speech perception, especially lexical tone perception, is essential for explaining reading difficulties in Mandarin-speaking children. PMID:29312031

  3. Effects of visual familiarity for words on interhemispheric cooperation for lexical processing.

    PubMed

    Yoshizaki, K

    2001-12-01

    The purpose of this study was to examine the effects of the visual familiarity of words on interhemispheric lexical processing. Words and pseudowords were presented tachistoscopically in the left, the right, or both visual fields. Two types of words were used as stimuli: Katakana-familiar words, which are more frequently written in Katakana script, and Hiragana-familiar words, which are written predominantly in Hiragana script. Two conditions were set up in terms of a word's visual familiarity: in the visually familiar condition, words were presented in their familiar script, and in the visually unfamiliar condition, words were presented in the less familiar script. Thirty-two right-handed Japanese students were asked to make lexical decisions. Results showed a bilateral gain, i.e., better performance with bilateral than with unilateral presentation, only in the visually familiar condition and not in the visually unfamiliar condition. These results suggest that the visual familiarity of a word influences interhemispheric lexical processing.

  4. Acquisition of the Novel Name-Nameless Category (N3C) Principle.

    ERIC Educational Resources Information Center

    Mervis, Carolyn B.; Bertrand, Jacquelyn

    1994-01-01

    Examined the use by children of the Novel Name-Nameless Category principle, under the framework that lexical principles are acquired in a developmental sequence. Results indicated that the particular principle was not available at the start of lexical acquisition but that exhaustive categorization ability and a vocabulary spurt occur…

  5. The Perception of Lexical Tone in Mambila.

    ERIC Educational Resources Information Center

    Connell, Bruce

    2000-01-01

    Examines tone perception in Mambila, a Benue-Congo language with four level lexical tones. A categorization experiment was run to determine some of the salient aspects of the perceptual nature of these tones. Results are discussed in light of what is known about universal tendencies of tone systems and the historical development of the Mambila…

  6. The effect of concurrent semantic categorization on delayed serial recall.

    PubMed

    Acheson, Daniel J; MacDonald, Maryellen C; Postle, Bradley R

    2011-01-01

    The influence of semantic processing on the serial ordering of items in short-term memory was explored using a novel dual-task paradigm. Participants engaged in 2 picture-judgment tasks while simultaneously performing delayed serial recall. List material varied in the presence of phonological overlap (Experiments 1 and 2) and in semantic content (concrete words in Experiments 1 and 3; nonwords in Experiments 2 and 3). Picture judgments varied in the extent to which they required accessing visual semantic information (i.e., semantic categorization and line orientation judgments). Results showed that, relative to line-orientation judgments, engaging in semantic categorization judgments increased the proportion of item-ordering errors for concrete lists but did not affect error proportions for nonword lists. Furthermore, although more ordering errors were observed for phonologically similar relative to dissimilar lists, no interactions were observed between the phonological overlap and picture-judgment task manipulations. These results demonstrate that lexical-semantic representations can affect the serial ordering of items in short-term memory. Furthermore, the dual-task paradigm provides a new method for examining when and how semantic representations affect memory performance.

  7. The Effect of Concurrent Semantic Categorization on Delayed Serial Recall

    PubMed Central

    Acheson, Daniel J.; MacDonald, Maryellen C.; Postle, Bradley R.

    2010-01-01

    The influence of semantic processing on the serial ordering of items in short-term memory was explored using a novel dual-task paradigm. Subjects engaged in two picture judgment tasks while simultaneously performing delayed serial recall. List material varied in the presence of phonological overlap (Experiments 1 and 2) and in semantic content (concrete words in Experiments 1 and 3; nonwords in Experiments 2 and 3). Picture judgments varied in the extent to which they required accessing visual semantic information (i.e., semantic categorization and line orientation judgments). Results showed that, relative to line orientation judgments, engaging in semantic categorization judgments increased the proportion of item ordering errors for concrete lists but did not affect error proportions for nonword lists. Furthermore, although more ordering errors were observed for phonologically similar relative to dissimilar lists, no interactions were observed between the phonological overlap and picture judgment task manipulations. These results thus demonstrate that lexical-semantic representations can affect the serial ordering of items in short-term memory. Furthermore, the dual-task paradigm provides a new method for examining when and how semantic representations affect memory performance. PMID:21058880

  8. Individual differences in language ability are related to variation in word recognition, not speech perception: evidence from eye movements.

    PubMed

    McMurray, Bob; Munson, Cheyenne; Tomblin, J Bruce

    2014-08-01

    The authors examined speech perception deficits associated with individual differences in language ability, contrasting auditory, phonological, or lexical accounts by asking whether lexical competition is differentially sensitive to fine-grained acoustic variation. Adolescents with a range of language abilities (N = 74, including 35 impaired) participated in an experiment based on McMurray, Tanenhaus, and Aslin (2002). Participants heard tokens from six 9-step voice onset time (VOT) continua spanning 2 words (beach/peach, beak/peak, etc.) while viewing a screen containing pictures of those words and 2 unrelated objects. Participants selected the referent while eye movements to each picture were monitored as a measure of lexical activation. Fixations were examined as a function of both VOT and language ability. Eye movements were sensitive to within-category VOT differences: As VOT approached the boundary, listeners made more fixations to the competing word. This did not interact with language ability, suggesting that language impairment is not associated with differential auditory sensitivity or phonetic categorization. Listeners with poorer language skills showed heightened competitor fixations overall, suggesting a deficit in lexical processes. Language impairment may be better characterized by a deficit in lexical competition (an inability to suppress competing words) rather than by differences in phonological categorization or auditory abilities.
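
    The key dependent measure in this design is the proportion of fixations to the competitor picture as a function of VOT step and language ability. As a rough illustration only (hypothetical column names, not the authors' analysis pipeline), trial-level fixation proportions can be aggregated as follows.

```python
# Illustrative sketch: mean proportion of competitor fixations per VOT step and
# participant group, computed from (hypothetical) trial-level eye-tracking data.
import pandas as pd

def competitor_fixation_curve(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean proportion of competitor fixations per VOT step and group.

    Expects columns: 'group', 'vot_step', 'prop_fix_competitor'
    (proportion of the analysis window spent fixating the competitor).
    """
    return (trials
            .groupby(["group", "vot_step"], as_index=False)["prop_fix_competitor"]
            .mean())

# Toy data for two groups and three VOT steps.
toy = pd.DataFrame({
    "group": ["typical"] * 3 + ["impaired"] * 3,
    "vot_step": [1, 5, 9] * 2,
    "prop_fix_competitor": [0.10, 0.25, 0.12, 0.18, 0.33, 0.20],
})
print(competitor_fixation_curve(toy))
```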

  9. The affective regulation of cognitive priming.

    PubMed

    Storbeck, Justin; Clore, Gerald L

    2008-04-01

    Semantic and affective priming are classic effects observed in cognitive and social psychology, respectively. The authors discovered that affect regulates such priming effects. In Experiment 1, positive and negative moods were induced before one of three priming tasks: evaluation, categorization, or lexical decision. As predicted, positive affect led to both affective priming (evaluation task) and semantic priming (category and lexical decision tasks). However, negative affect inhibited such effects. In Experiment 2, participants in their natural affective state completed the same priming tasks as in Experiment 1. As expected, affective priming (evaluation task) and category priming (categorization and lexical decision tasks) were observed in such resting affective states. Hence, the authors conclude that negative affect inhibits semantic and affective priming. These results support recent theoretical models, which suggest that positive affect promotes associations among strong and weak concepts, and that negative affect impairs such associations (Clore & Storbeck, 2006; Kuhl, 2000). Copyright 2008 APA.

  10. Conscious intention to speak proactively facilitates lexical access during overt object naming

    PubMed Central

    Strijkers, Kristof; Holcomb, Phillip J.; Costa, Albert

    2013-01-01

    The present study explored when and how the top-down intention to speak influences the language production process. We did so by comparing the brain’s electrical response for a variable known to affect lexical access, namely word frequency, during overt object naming and non-verbal object categorization. We found that during naming, the event-related brain potentials elicited for objects with low frequency names started to diverge from those with high frequency names as early as 152 ms after stimulus onset, while during non-verbal categorization the same frequency comparison appeared 200 ms later, eliciting a qualitatively different brain response. Thus, only when participants had the conscious intention to name an object did the brain rapidly engage in lexical access. The data offer evidence that the top-down intention to speak proactively facilitates the activation of words related to perceived objects. PMID:24039339

  11. Lexical Representation of Schwa Words: Two Mackerels, but Only One Salami

    ERIC Educational Resources Information Center

    Burki, Audrey; Gaskell, M. Gareth

    2012-01-01

    The present study investigated the lexical representations underlying the production of English schwa words. Two types of schwa words were compared: words with a schwa in poststress position (e.g., mackerel), whose schwa and reduced variants differ in a categorical way, and words with a schwa in prestress position (e.g.,…

  12. The effect of response mode on lateralized lexical decision performance.

    PubMed

    Weems, Scott A; Zaidel, Eran

    2005-01-01

    We examined the effect of manipulations of response programming, i.e., post-lexical decision-making requirements, on lateralized lexical decision. Although response hand manipulations tend to elicit weaker laterality effects than those involving visual field of presentation, the implementation of different lateralized response strategies remains relatively unexplored. Four different response conditions were compared in a between-subjects design: (1) unimanual, (2) bimanual, (3) congruent visual field/response hand, and (4) response hand confounded with target lexicality. It was observed that hemispheric specialization and interaction effects during the lexical decision task remained unchanged despite the very different response requirements. However, a priori examination of each condition revealed that some manipulations yielded a reduced power to detect laterality effects. The consistent observation of left hemisphere specialization, and of both left and right hemisphere lexicality priming effects (interhemispheric transfer), indicates that these effects are relatively robust and unaffected by late occurring processes in the lexical decision task. It appears that the lateralized response mode neither determines nor reflects the laterality of decision processes. In contrast, the target visual half-field is critical for determining the deciding hemisphere and is a sensitive index of hemispheric specialization, as well as of directional interhemispheric transfer.

  13. MEGALEX: A megastudy of visual and auditory word recognition.

    PubMed

    Ferrand, Ludovic; Méot, Alain; Spinelli, Elsa; New, Boris; Pallier, Christophe; Bonin, Patrick; Dufau, Stéphane; Mathôt, Sebastiaan; Grainger, Jonathan

    2018-06-01

    Using the megastudy approach, we report a new database (MEGALEX) of visual and auditory lexical decision times and accuracy rates for tens of thousands of words. We collected visual lexical decision data for 28,466 French words and the same number of pseudowords, and auditory lexical decision data for 17,876 French words and the same number of pseudowords (synthesized tokens were used for the auditory modality). This constitutes the first large-scale database for auditory lexical decision, and the first database to enable a direct comparison of word recognition in different modalities. Different regression analyses were conducted to illustrate potential ways to exploit this megastudy database. First, we compared the proportions of variance accounted for by five word frequency measures. Second, we conducted item-level regression analyses to examine the relative importance of the lexical variables influencing performance in the different modalities (visual and auditory). Finally, we compared the similarities and differences between the two modalities. All data are freely available on our website (https://sedufau.shinyapps.io/megalex/) and are searchable at www.lexique.org, inside the Open Lexique search engine.
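
    Item-level regressions of this kind typically regress mean lexical decision latency on a (log-transformed) frequency measure and compare the variance each measure explains. The sketch below illustrates the idea on toy data with hypothetical column names; it is not the MEGALEX analysis script, whose data are available at the URLs above.

```python
# Hedged sketch: compare how much variance in item-level lexical decision RTs
# each word frequency measure accounts for (toy data, hypothetical columns).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def variance_explained(items: pd.DataFrame, freq_columns: list[str]) -> dict[str, float]:
    """R-squared of mean RT regressed on log10 of each frequency measure."""
    r2 = {}
    for col in freq_columns:
        data = items.assign(logf=np.log10(items[col] + 1))
        model = smf.ols("mean_rt ~ logf", data=data).fit()
        r2[col] = model.rsquared
    return r2

# Toy item-level data with two competing frequency counts.
items = pd.DataFrame({
    "mean_rt": [620, 580, 710, 650, 560, 690],
    "freq_books": [120, 900, 4, 35, 1500, 10],
    "freq_subtitles": [200, 750, 9, 60, 2000, 6],
})
print(variance_explained(items, ["freq_books", "freq_subtitles"]))
```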

  14. Emotional words facilitate lexical but not early visual processing.

    PubMed

    Trauer, Sophie M; Kotz, Sonja A; Müller, Matthias M

    2015-12-12

    Emotional scenes and faces have been shown to capture and bind visual resources at early sensory processing stages, i.e., in early visual cortex. However, emotional words have led to mixed results. In the current study, ERPs were assessed simultaneously with steady-state visual evoked potentials (SSVEPs) to measure attention effects on early visual activity in emotional word processing. Neutral and negative words were flickered at 12.14 Hz whilst participants performed a lexical decision task. Emotional word content did not modulate the 12.14 Hz SSVEP amplitude, nor did word lexicality. However, emotional words affected the ERP. Negative compared to neutral words, as well as words compared to pseudowords, led to enhanced deflections in the P2 time range, indicative of lexico-semantic access. The N400 was reduced for negative compared to neutral words and enhanced for pseudowords compared to words, indicating facilitated semantic processing of emotional words. LPC amplitudes reflected word lexicality and thus the task-relevant response. In line with previous ERP and imaging evidence, the present results indicate that written emotional words are facilitated in processing only subsequent to visual analysis.

  15. Lexical Processing in Spanish Sign Language (LSE)

    ERIC Educational Resources Information Center

    Carreiras, Manuel; Gutierrez-Sigut, Eva; Baquero, Silvia; Corina, David

    2008-01-01

    Lexical access is concerned with how the spoken or visual input of language is projected onto the mental representations of lexical forms. To date, most theories of lexical access have been based almost exclusively on studies of spoken languages and/or orthographic representations of spoken languages. Relatively few studies have examined how…

  16. Subliminal semantic priming in speech.

    PubMed

    Daltrozzo, Jérôme; Signoret, Carine; Tillmann, Barbara; Perrin, Fabien

    2011-01-01

    Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime.

  17. Subliminal Semantic Priming in Speech

    PubMed Central

    Tillmann, Barbara; Perrin, Fabien

    2011-01-01

    Numerous studies have reported subliminal repetition and semantic priming in the visual modality. We transferred this paradigm to the auditory modality. Prime awareness was manipulated by a reduction of sound intensity level. Uncategorized prime words (according to a post-test) were followed by semantically related, unrelated, or repeated target words (presented without intensity reduction) and participants performed a lexical decision task (LDT). Participants with slower reaction times in the LDT showed semantic priming (faster reaction times for semantically related compared to unrelated targets) and negative repetition priming (slower reaction times for repeated compared to semantically related targets). This is the first report of semantic priming in the auditory modality without conscious categorization of the prime. PMID:21655277

  18. Behavioral evidence for inter-hemispheric cooperation during a lexical decision task: a divided visual field experiment.

    PubMed

    Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica

    2013-01-01

    HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemispheric interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing.

  19. Behavioral evidence for inter-hemispheric cooperation during a lexical decision task: a divided visual field experiment

    PubMed Central

    Perrone-Bertolotti, Marcela; Lemonnier, Sophie; Baciu, Monica

    2013-01-01

    HIGHLIGHTS: The redundant bilateral visual presentation of verbal stimuli decreases asymmetry and increases the cooperation between the two hemispheres. The increased cooperation between the hemispheres is related to semantic information during lexical processing. The inter-hemispheric interaction is represented by both inhibition and cooperation. This study explores inter-hemispheric interaction (IHI) during a lexical decision task by using a behavioral approach, the bilateral presentation of stimuli within a divided visual field experiment. Previous studies have shown that compared to unilateral presentation, the bilateral redundant (BR) presentation decreases the inter-hemispheric asymmetry and facilitates the cooperation between hemispheres. However, it is still poorly understood which type of information facilitates this cooperation. In the present study, verbal stimuli were presented unilaterally (left or right visual hemi-field successively) and bilaterally (left and right visual hemi-field simultaneously). Moreover, during the bilateral presentation of stimuli, we manipulated the relationship between target and distractors in order to specify the type of information which modulates the IHI. Thus, three types of information were manipulated: perceptual, semantic, and decisional, respectively named pre-lexical, lexical and post-lexical processing. Our results revealed left hemisphere (LH) lateralization during the lexical decision task. In terms of inter-hemispheric interaction, the perceptual and decision-making information increased the inter-hemispheric asymmetry, suggesting the inhibition of one hemisphere upon the other. In contrast, semantic information decreased the inter-hemispheric asymmetry, suggesting cooperation between the hemispheres. We discussed our results according to current models of IHI and concluded that cerebral hemispheres interact and communicate according to various excitatory and inhibitory mechanisms, all of which depend on specific processes and various levels of word processing. PMID:23818879

  20. Rapid Extraction of Lexical Tone Phonology in Chinese Characters: A Visual Mismatch Negativity Study

    PubMed Central

    Wang, Xiao-Dong; Liu, A-Ping; Wu, Yin-Yuan; Wang, Peng

    2013-01-01

    Background: In alphabetic languages, emerging evidence from behavioral and neuroimaging studies shows the rapid and automatic activation of phonological information in visual word recognition. In the mapping from orthography to phonology, unlike most alphabetic languages in which there is a natural correspondence between the visual and phonological forms, in logographic Chinese the mapping between visual and phonological forms is rather arbitrary and depends on learning and experience. Whether phonological information is rapidly and automatically extracted from Chinese characters by the brain has not yet been thoroughly addressed. Methodology/Principal Findings: We continuously presented Chinese characters differing in orthography and meaning to adult native Mandarin Chinese speakers to construct a constantly varying visual stream. In the stream, most stimuli were mutually homophonous Chinese characters: the phonological features embedded in these visual characters were the same, including consonants, vowels, and the lexical tone. Occasionally, the rule of phonology was randomly violated by characters whose phonological features differed in the lexical tone. Conclusions/Significance: We showed that the violation of the lexical tone phonology evoked an early, robust visual response, as revealed by whole-head electrical recordings of the visual mismatch negativity (vMMN), indicating the rapid extraction of phonological information embedded in Chinese characters. Source analysis revealed that the vMMN involved neural activation of the visual cortex, suggesting that visual sensory memory is sensitive to phonological information embedded in visual words at an early processing stage. PMID:23437235
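
    The stimulus stream described here follows the logic of an oddball design: frequent standards sharing a lexical tone and rare deviants differing in tone. A minimal sketch of generating such a sequence is shown below; the character labels, deviant probability, and the non-adjacency constraint on deviants are assumptions for illustration, not details taken from the study.

```python
# Illustrative sketch (not the authors' stimulus code): build a visual oddball
# stream in which most items come from a 'standard' set sharing a lexical tone
# and a small proportion are 'deviant' items with a different tone.
import random

def make_oddball_stream(standards, deviants, n_trials=400, p_deviant=0.1, seed=0):
    rng = random.Random(seed)
    stream = []
    last_was_deviant = True  # never start with a deviant
    for _ in range(n_trials):
        if not last_was_deviant and rng.random() < p_deviant:
            stream.append(("deviant", rng.choice(deviants)))
            last_was_deviant = True
        else:
            stream.append(("standard", rng.choice(standards)))
            last_was_deviant = False
    return stream

# Toy example with placeholder character labels.
stream = make_oddball_stream(standards=["char_T1_a", "char_T1_b", "char_T1_c"],
                             deviants=["char_T4_a", "char_T4_b"])
print(sum(kind == "deviant" for kind, _ in stream), "deviants out of", len(stream))
```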

  1. Attentional Modulation of Masked Repetition and Categorical Priming in Young and Older Adults

    ERIC Educational Resources Information Center

    Fabre, Ludovic; Lemaire, Patrick; Grainger, Jonathan

    2007-01-01

    Three experiments examined the effects of temporal attention and aging on masked repetition and categorical priming for numbers and words. Participants' temporal attention was manipulated by varying the stimulus onset asynchrony (i.e., constant or variable SOA). In Experiment 1, participants performed a parity judgment task and a lexical decision…

  2. When Bees Hamper the Production of Honey: Lexical Interference from Associates in Speech Production

    ERIC Educational Resources Information Center

    Abdel Rahman, Rasha; Melinger, Alissa

    2007-01-01

    In this article, the authors explore semantic context effects in speaking. In particular, the authors investigate a marked discrepancy between categorically and associatively induced effects; only categorical relationships have been reported to cause interference in object naming. In Experiments 1 and 2, a variant of the semantic blocking paradigm…

  3. Differential processing of thematic and categorical conceptual relations in spoken word production.

    PubMed

    de Zubicaray, Greig I; Hansen, Samuel; McMahon, Katie L

    2013-02-01

    Studies of semantic context effects in spoken word production have typically distinguished between categorical (or taxonomic) and associative relations. However, associates tend to confound semantic features or morphological representations, such as whole-part relations and compounds (e.g., BOAT-anchor, BEE-hive). Using a picture-word interference paradigm and functional magnetic resonance imaging (fMRI), we manipulated categorical (COW-rat) and thematic (COW-pasture) target-distractor relations in a balanced design, finding interference and facilitation effects on naming latencies, respectively, as well as differential patterns of brain activation compared with an unrelated distractor condition. While both types of distractor relation activated the middle portion of the left middle temporal gyrus (MTG), consistent with retrieval of conceptual or lexical representations, categorical relations involved additional activation of posterior left MTG, consistent with retrieval of a lexical cohort. Thematic relations involved additional activation of the left angular gyrus. These results converge with recent lesion evidence implicating the left inferior parietal lobe in processing thematic relations and may indicate a potential role for this region during spoken word production. Copyright 2013 APA, all rights reserved.

  4. Dissociating Visual Form from Lexical Frequency Using Japanese

    ERIC Educational Resources Information Center

    Twomey, Tae; Duncan, Keith J. Kawabata; Hogan, John S.; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T.

    2013-01-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form--an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally,…

  5. Dissociating visual form from lexical frequency using Japanese.

    PubMed

    Twomey, Tae; Kawabata Duncan, Keith J; Hogan, John S; Morita, Kenji; Umeda, Kazumasa; Sakai, Katsuyuki; Devlin, Joseph T

    2013-05-01

    In Japanese, the same word can be written in either morphographic Kanji or syllabographic Hiragana and this provides a unique opportunity to disentangle a word's lexical frequency from the frequency of its visual form - an important distinction for understanding the neural information processing in regions engaged by reading. Behaviorally, participants responded more quickly to high than low frequency words and to visually familiar relative to less familiar words, independent of script. Critically, the imaging results showed that visual familiarity, as opposed to lexical frequency, had a strong effect on activation in ventral occipito-temporal cortex. Activation here was also greater for Kanji than Hiragana words and this was not due to their inherent differences in visual complexity. These findings can be understood within a predictive coding framework in which vOT receives bottom-up information encoding complex visual forms and top-down predictions from regions encoding non-visual attributes of the stimulus. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. A dual-task investigation of automaticity in visual word processing

    NASA Technical Reports Server (NTRS)

    McCann, R. S.; Remington, R. W.; Van Selst, M.

    2000-01-01

    An analysis of activation models of visual word processing suggests that frequency-sensitive forms of lexical processing should proceed normally while unattended. This hypothesis was tested by having participants perform a speeded pitch discrimination task followed by lexical decisions or word naming. As the stimulus onset asynchrony between the tasks was reduced, lexical-decision and naming latencies increased dramatically. Word-frequency effects were additive with the increase, indicating that frequency-sensitive processing was subject to postponement while attention was devoted to the other task. Either (a) the same neural hardware shares responsibility for lexical processing and central stages of choice reaction time task processing and cannot perform both computations simultaneously, or (b) lexical processing is blocked in order to optimize performance on the pitch discrimination task. Either way, word processing is not as automatic as activation models suggest.

  7. The Downside of Greater Lexical Influences: Selectively Poorer Speech Perception in Noise

    PubMed Central

    Xie, Zilong; Tessmer, Rachel; Chandrasekaran, Bharath

    2017-01-01

    Purpose: Although lexical information influences phoneme perception, the extent to which reliance on lexical information enhances speech processing in challenging listening environments is unclear. We examined the extent to which individual differences in lexical influences on phonemic processing impact speech processing in maskers containing varying degrees of linguistic information (2-talker babble or pink noise). Method: Twenty-nine monolingual English speakers were instructed to ignore the lexical status of spoken syllables (e.g., gift vs. kift) and to only categorize the initial phonemes (/g/ vs. /k/). The same participants then performed speech recognition tasks in the presence of 2-talker babble or pink noise in audio-only and audiovisual conditions. Results: Individuals who demonstrated greater lexical influences on phonemic processing experienced greater speech processing difficulties in 2-talker babble than in pink noise. These selective difficulties were present across audio-only and audiovisual conditions. Conclusion: Individuals with greater reliance on lexical processes during speech perception exhibit impaired speech recognition in listening conditions in which competing talkers introduce audible linguistic interference. Future studies should examine the locus of lexical influences/interferences on phonemic processing and speech-in-speech processing. PMID:28586824

  8. Extrinsic cognitive load impairs low-level speech perception.

    PubMed

    Mattys, Sven L; Barden, Katharine; Samuel, Arthur G

    2014-06-01

    Recent research has suggested that the extrinsic cognitive load generated by performing a nonlinguistic visual task while perceiving speech increases listeners' reliance on lexical knowledge and decreases their capacity to perceive phonetic detail. In the present study, we asked whether this effect is accounted for better at a lexical or a sublexical level. The former would imply that cognitive load directly affects lexical activation but not perceptual sensitivity; the latter would imply that increased lexical reliance under cognitive load is only a secondary consequence of imprecise or incomplete phonetic encoding. Using the phoneme restoration paradigm, we showed that perceptual sensitivity decreases (i.e., phoneme restoration increases) almost linearly with the effort involved in the concurrent visual task. However, cognitive load had only a minimal effect on the contribution of lexical information to phoneme restoration. We concluded that the locus of extrinsic cognitive load on the speech system is perceptual rather than lexical. Mechanisms by which cognitive load increases tolerance to acoustic imprecision and broadens phonemic categories were discussed.

  9. Neural signatures of lexical tone reading.

    PubMed

    Kwok, Veronica P Y; Wang, Tianfu; Chen, Siping; Yakpo, Kofi; Zhu, Linlin; Fox, Peter T; Tan, Li Hai

    2015-01-01

    Research on how lexical tone is neuroanatomically represented in the human brain is central to our understanding of cortical regions subserving language. Past studies have focused exclusively on tone perception in the spoken language, and little is known about lexical tone processing in reading visual words and its associated brain mechanisms. In this study, we performed two experiments to identify the neural substrates of Chinese tone reading. First, we used a tone judgment paradigm to investigate tone processing of visually presented Chinese characters. We found that, relative to baseline, tone perception of printed Chinese characters was mediated by strong brain activation in bilateral frontal regions, the left inferior parietal lobule, the left posterior middle/medial temporal gyrus, the left inferior temporal region, bilateral visual systems, and the cerebellum. Surprisingly, no activation was found in superior temporal regions, brain sites well known for speech tone processing. In an activation likelihood estimation (ALE) meta-analysis combining the results of relevant published studies, we attempted to elucidate whether the left temporal cortex activity identified in Experiment 1 is consistent with that found in previous studies of auditory lexical tone perception. ALE results showed that only the left superior temporal gyrus and putamen were critical in auditory lexical tone processing. These findings suggest that activation in the superior temporal cortex associated with lexical tone perception is modality-dependent. © 2014 Wiley Periodicals, Inc.

  10. Lexical Categorization Modalities in Pre-School Children: Influence of Perceptual and Verbal Tasks

    ERIC Educational Resources Information Center

    Tallandini, Maria Anna; Roia, Anna

    2005-01-01

    This study investigates how categorical organization functions in pre-school children, focusing on the dichotomy between living and nonliving things. The variables of familiarity, frequency of word use and perceptual complexity were controlled. Sixty children aged between 4 years and 5 years 10 months were investigated. Three tasks were used: a…

  11. The Role of Visual Form in Lexical Access: Evidence from Chinese Classifier Production

    ERIC Educational Resources Information Center

    Bi, Yanchao; Yu, Xi; Geng, Jingyi; Alario, F. -Xavier.

    2010-01-01

    The interface between the conceptual and lexical systems was investigated in a word production setting. We tested the effects of two conceptual dimensions--semantic category and visual shape--on the selection of Chinese nouns and classifiers. Participants named pictures with nouns ("rope") or classifier-noun phrases ("one-classifier-rope") in…

  12. Brain activation for lexical decision and reading aloud: two sides of the same coin?

    PubMed

    Carreiras, Manuel; Mechelli, Andrea; Estévez, Adelina; Price, Cathy J

    2007-03-01

    This functional magnetic resonance imaging study compared the neuronal implementation of word and pseudoword processing during two commonly used word recognition tasks: lexical decision and reading aloud. In the lexical decision task, participants made a finger-press response to indicate whether a visually presented letter string is a word or a pseudoword (e.g., "paple"). In the reading-aloud task, participants read aloud visually presented words and pseudowords. The same sets of words and pseudowords were used for both tasks. This enabled us to look for the effects of task (lexical decision vs. reading aloud), lexicality (words vs. nonwords), and the interaction of lexicality with task. We found very similar patterns of activation for lexical decision and reading aloud in areas associated with word recognition and lexical retrieval (e.g., left fusiform gyrus, posterior temporal cortex, pars opercularis, and bilateral insulae), but task differences were observed bilaterally in sensorimotor areas. Lexical decision increased activation in areas associated with decision making and finger tapping (bilateral postcentral gyri, supplementary motor area, and right cerebellum), whereas reading aloud increased activation in areas associated with articulation and hearing the sound of the spoken response (bilateral precentral gyri, superior temporal gyri, and posterior cerebellum). The effect of lexicality (pseudoword vs. words) was also remarkably consistent across tasks. Nevertheless, increased activation for pseudowords relative to words was greater in the left precentral cortex for reading than lexical decision, and greater in the right inferior frontal cortex for lexical decision than reading. We attribute these effects to differences in the demands on speech production and decision-making processes, respectively.

  13. Sources of Information for Stress Assignment in Reading Greek

    ERIC Educational Resources Information Center

    Protopapas, Athanassios; Gerakaki, Svetlana; Alexandri, Stella

    2007-01-01

    To assign lexical stress when reading, the Greek reader can potentially rely on lexical information (knowledge of the word), visual-orthographic information (processing of the written diacritic), or a default metrical strategy (penultimate stress pattern). Previous studies with secondary education children have shown strong lexical effects on…

  14. Testing for Lexical Competition during Reading: Fast Priming with Orthographic Neighbors

    ERIC Educational Resources Information Center

    Nakayama, Mariko; Sears, Christopher R.; Lupker, Stephen J.

    2010-01-01

    Recent studies have found that masked word primes that are orthographic neighbors of the target inhibit lexical decision latencies (Davis & Lupker, 2006; Nakayama, Sears, & Lupker, 2008), consistent with the predictions of lexical competition models of visual word identification (e.g., Grainger & Jacobs, 1996). In contrast, using the…

  15. Visual-Attentional Span and Lexical Decision in Skilled Adult Readers

    ERIC Educational Resources Information Center

    Holmes, Virginia M.; Dawson, Georgia

    2014-01-01

    The goal of the study was to examine the association between visual-attentional span and lexical decision in skilled adult readers. In the span tasks, an array of letters was presented briefly and recognition or production of a single cued letter (partial span) or production of all letters (whole span) was required. Independently of letter…

  16. Optical Phonetics and Visual Perception of Lexical and Phrasal Stress in English

    ERIC Educational Resources Information Center

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L.; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of…

  17. Phonological and Semantic Priming in Children with Reading Disability

    ERIC Educational Resources Information Center

    Betjemann, Rebecca S.; Keenan, Janice M.

    2008-01-01

    Lexical priming was assessed in children with reading disability (RD) and in age-matched controls (M = 11.5 years), in visual and auditory lexical decision tasks. In the visual task, children with RD were found to have deficits in semantic (SHIP-BOAT), phonological/graphemic (GOAT-BOAT), and combined (FLOAT-BOAT) priming. The same pattern of…

  18. The Effects of Bilateral Presentations on Lateralized Lexical Decision

    ERIC Educational Resources Information Center

    Fernandino, Leonardo; Iacoboni, Marco; Zaidel, Eran

    2007-01-01

    We investigated how lateralized lexical decision is affected by the presence of distractors in the visual hemifield contralateral to the target. The study had three goals: first, to determine how the presence of a distractor (either a word or a pseudoword) affects visual field differences in the processing of the target; second, to identify the…

  19. Lexical decision with pseudohomophones and reading in the semantic variant of primary progressive aphasia: A double dissociation.

    PubMed

    Boukadi, Mariem; Potvin, Karel; Macoir, Joël; Jr Laforce, Robert; Poulin, Stéphane; Brambati, Simona M; Wilson, Maximiliano A

    2016-06-01

    The co-occurrence of semantic impairment and surface dyslexia in the semantic variant of primary progressive aphasia (svPPA) has often been taken as supporting evidence for the central role of semantics in visual word processing. According to connectionist models, semantic access is needed to accurately read irregular words. They also postulate that reliance on semantics is necessary to perform the lexical decision task under certain circumstances (for example, when the stimulus list comprises pseudohomophones). In the present study, we report two svPPA cases: M.F. who presented with surface dyslexia but performed accurately on the lexical decision task with pseudohomophones, and R.L. who showed no surface dyslexia but performed below the normal range on the lexical decision task with pseudohomophones. This double dissociation between reading and lexical decision with pseudohomophones is in line with the dual-route cascaded (DRC) model of reading. According to this model, impairments in visual word processing in svPPA are not necessarily associated with the semantic deficits characterizing this disease. Our findings also call into question the central role given to semantics in visual word processing within the connectionist account. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Brain Routes for Reading in Adults with and without Autism: EMEG Evidence

    ERIC Educational Resources Information Center

    Moseley, Rachel L.; Pulvermüller, Friedemann; Mohr, Bettina; Lombardo, Michael V.; Baron-Cohen, Simon; Shtyrov, Yury

    2014-01-01

    Reading utilises at least two neural pathways. The temporal lexical route visually maps whole words to their lexical entries, whilst the nonlexical route decodes words phonologically via parietal cortex. Readers typically employ the lexical route for familiar words, but poor comprehension plus precocity at mechanically "sounding out"…

  1. Impact of auditory-visual bimodality on lexical retrieval in Alzheimer's disease patients.

    PubMed

    Simoes Loureiro, Isabelle; Lefebvre, Laurent

    2015-01-01

    The aim of this study was to determine whether the positive impact of auditory-visual bimodality on lexical retrieval generalizes to Alzheimer's disease (AD) patients. In healthy elderly persons, naming improves when additional sensory signals are included. The hypothesis of this study was that the same influence would be observable in AD patients. Sixty elderly participants separated into three groups (healthy subjects, stage 1 AD patients, and stage 2 AD patients) were tested with a battery of naming tasks comprising three different modalities: a visual modality, an auditory modality, and a combined visual and auditory modality (bimodality). Our results reveal a positive influence of bimodality on naming accuracy for bimodal items (compared with unimodal items) and on naming latency (compared with unimodal auditory items). These results suggest that multisensory enrichment can improve lexical retrieval in AD patients.

  2. Picture-Induced Semantic Interference Reflects Lexical Competition during Object Naming

    PubMed Central

    Aristei, Sabrina; Zwitserlood, Pienie; Rahman, Rasha Abdel

    2012-01-01

    With a picture–picture experiment, we contrasted competitive and non-competitive models of lexical selection during language production. Participants produced novel noun–noun compounds in response to two adjacently displayed objects that were categorically related or unrelated (e.g., depicted objects: apple and cherry; naming response: “apple–cherry”). We observed semantic interference, with slower compound naming for related relative to unrelated pictures, very similar to interference effects produced by semantically related context words in picture–word-interference paradigms. This finding suggests that previous failures to observe reliable interference induced by context pictures may be due to the weakness of lexical activation and competition induced by pictures, relative to words. The production of both picture names within one integrated compound word clearly enhances lexical activation, resulting in measurable interference effects. We interpret this interference as resulting from lexical competition, because the alternative interpretation, in terms of response-exclusion from the articulatory buffer, does not apply to pictures, even when they are named. PMID:22363304

  3. Neural dissociation in the production of lexical versus classifier signs in ASL: distinct patterns of hemispheric asymmetry.

    PubMed

    Hickok, Gregory; Pickell, Herbert; Klima, Edward; Bellugi, Ursula

    2009-01-01

    We examine the hemispheric organization for the production of two classes of ASL signs, lexical signs and classifier signs. Previous work has found strong left hemisphere dominance for the production of lexical signs, but several authors have speculated that classifier signs may involve the right hemisphere to a greater degree because they can represent spatial information in a topographic, non-categorical manner. Twenty-one unilaterally brain damaged signers (13 left hemisphere damaged, 8 right hemisphere damaged) were presented with a story narration task designed to elicit both lexical and classifier signs. Relative frequencies of the two types of errors were tabulated. Left hemisphere damaged signers produced significantly more lexical errors than did right hemisphere damaged signers, whereas the reverse pattern held for classifier signs. Our findings argue for different patterns of hemispheric asymmetry for these two classes of ASL signs. We suggest that the requirement to encode analogue spatial information in the production of classifier signs results in the increased involvement of the right hemisphere systems.

  4. Lexical-Semantic Processing and Reading: Relations between Semantic Priming, Visual Word Recognition and Reading Comprehension

    ERIC Educational Resources Information Center

    Nobre, Alexandre de Pontes; de Salles, Jerusa Fumagalli

    2016-01-01

    The aim of this study was to investigate relations between lexical-semantic processing and two components of reading: visual word recognition and reading comprehension. Sixty-eight children from private schools in Porto Alegre, Brazil, from 7 to 12 years, were evaluated. Reading was assessed with a word/nonword reading task and a reading…

  5. Optical phonetics and visual perception of lexical and phrasal stress in English.

    PubMed

    Scarborough, Rebecca; Keating, Patricia; Mattys, Sven L; Cho, Taehong; Alwan, Abeer

    2009-01-01

    In a study of optical cues to the visual perception of stress, three American English talkers spoke words that differed in lexical stress and sentences that differed in phrasal stress, while video and movements of the face were recorded. The production of stressed and unstressed syllables from these utterances was analyzed along many measures of facial movement, which were generally larger and faster in the stressed condition. In a visual perception experiment, 16 perceivers identified the location of stress in forced-choice judgments of video clips of these utterances (without audio). Phrasal stress was better perceived than lexical stress. The relation of the visual intelligibility of the prosody of these utterances to the optical characteristics of their production was analyzed to determine which cues are associated with successful visual perception. While most optical measures were correlated with perception performance, chin measures, especially Chin Opening Displacement, contributed the most to correct perception independently of the other measures. Thus, our results indicate that the information for visual stress perception is mainly associated with mouth opening movements.

  6. Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE)

    DTIC Science & Technology

    2015-05-01

    Lexical Link Analysis (LLA) Application: Improving Web Service to Defense Acquisition Visibility Environment (DAVE). Briefing presented May 13-14, 2015, by Dr. Ying... (report date: May 2015; dates covered: 2015). Methods: Lexical Link Analysis (LLA) Core, providing LLA reports and visualizations, and Collaborative Learning Agents (CLA).

  7. The Precise Time Course of Lexical Activation: MEG Measurements of the Effects of Frequency, Probability, and Density in Lexical Decision

    ERIC Educational Resources Information Center

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkanen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick,…

  8. Graded effects of regularity in language revealed by N400 indices of morphological priming.

    PubMed

    Kielar, Aneta; Joanisse, Marc F

    2010-07-01

    Differential electrophysiological effects for regular and irregular linguistic forms have been used to support the theory that grammatical rules are encoded using a dedicated cognitive mechanism. The alternative hypothesis is that language systematicities are encoded probabilistically in a way that does not categorically distinguish rule-like and irregular forms. In the present study, this matter was investigated more closely by focusing specifically on whether the regular-irregular distinction in English past tenses is categorical or graded. We compared the ERP priming effects of regulars (baked-bake), vowel-change irregulars (sang-sing), and "suffixed" irregulars that display a partial regularity (e.g., slept-sleep), as well as forms that are related strictly along formal or semantic dimensions. Participants performed a visual lexical decision task with either visual (Experiment 1) or auditory primes (Experiment 2). Stronger N400 priming effects were observed for regular than vowel-change irregular verbs, whereas suffixed irregulars tended to group with regular verbs. Subsequent analyses decomposed early versus late-going N400 priming and suggested that differences among forms can be attributed to the orthographic similarity of prime and target. Effects of morphological relatedness were observed in the later-going time period; however, we failed to observe true regular-irregular dissociations in either experiment. The results indicate that morphological effects emerge from the interaction of orthographic, phonological, and semantic overlap between words.

  9. Lexical Decision with Left, Right and Center Visual Field Presentation: A Comparison between Dyslexic and Regular Readers by Means of Electrophysiological and Behavioral Measures

    ERIC Educational Resources Information Center

    Shaul, Shelley

    2012-01-01

    This study examined the differences in processing between regular and dyslexic readers in a lexical decision task in different visual field presentations (left, right, and center). The research utilized behavioral measures that provide information on accuracy and reaction time and electro-physiological measures that permit the examination of brain…

  10. Do handwritten words magnify lexical effects in visual word recognition?

    PubMed

    Perea, Manuel; Gil-López, Cristina; Beléndez, Victoria; Carreiras, Manuel

    2016-01-01

    An examination of how the word recognition system is able to process handwritten words is fundamental to formulate a comprehensive model of visual word recognition. Previous research has revealed that the magnitude of lexical effects (e.g., the word-frequency effect) is greater with handwritten words than with printed words. In the present lexical decision experiments, we examined whether the quality of handwritten words moderates the recruitment of top-down feedback, as reflected in word-frequency effects. Results showed a reading cost for difficult-to-read and easy-to-read handwritten words relative to printed words. But the critical finding was that difficult-to-read handwritten words, but not easy-to-read handwritten words, showed a greater word-frequency effect than printed words. Therefore, the inherent physical variability of handwritten words does not necessarily boost the magnitude of lexical effects.

  11. Spatio-temporal Dynamics of Referential and Inferential Naming: Different Brain and Cognitive Operations to Lexical Selection.

    PubMed

    Fargier, Raphaël; Laganaro, Marina

    2017-03-01

    Picture naming tasks are widely used to elicit the production of specific words and sentences in psycholinguistic and neuroimaging research. However, the generation of lexical concepts from a visual input is clearly not the exclusive way speech production is triggered. In inferential speech encoding, the concept is not provided by a visual input but is elaborated through semantic and/or episodic associations. It is therefore likely that the cognitive operations leading to lexical selection and word encoding differ between inferential and referential expressive language. In particular, in picture naming, lexical selection might ensue from a simple association between a perceptual visual representation and a word, with minimal semantic processing, whereas richer semantic associations are involved in lexical retrieval in inferential situations. Here we address this hypothesis by analyzing ERP correlates during word production in a referential and an inferential task. The participants produced the same words elicited from pictures or from short written definitions. The two tasks displayed similar electrophysiological patterns only in the time period preceding the verbal response. In the stimulus-locked ERPs, waveform amplitudes and periods of stable global electrophysiological patterns differed across tasks after the P100 component and until 400-500 ms, suggesting the involvement of different, task-specific neural networks. Based on the analysis of the time windows affected by specific semantic and lexical variables in each task, we conclude that lexical selection is underpinned by a different set of conceptual and brain processes, with semantic processes clearly preceding word retrieval in naming from definition, whereas semantic information is enriched in parallel with word retrieval in picture naming.

  12. I see/hear what you mean: semantic activation in visual word recognition depends on perceptual attention.

    PubMed

    Connell, Louise; Lynott, Dermot

    2014-04-01

    How does the meaning of a word affect how quickly we can recognize it? Accounts of visual word recognition allow semantic information to facilitate performance but have neglected the role of modality-specific perceptual attention in activating meaning. We predicted that modality-specific semantic information would differentially facilitate lexical decision and reading aloud, depending on how perceptual attention is implicitly directed by each task. Large-scale regression analyses showed the perceptual modalities involved in representing a word's referent concept influence how easily that word is recognized. Both lexical decision and reading-aloud tasks direct attention toward vision, and are faster and more accurate for strongly visual words. Reading aloud additionally directs attention toward audition and is faster and more accurate for strongly auditory words. Furthermore, the overall semantic effects are as large for reading aloud as lexical decision and are separable from age-of-acquisition effects. These findings suggest that implicitly directing perceptual attention toward a particular modality facilitates representing modality-specific perceptual information in the meaning of a word, which in turn contributes to the lexical decision or reading-aloud response.

  13. Lexical Access in Early Stages of Visual Word Processing: A Single-Trial Correlational MEG Study of Heteronym Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2009-01-01

    We present an MEG study of heteronym recognition, aiming to distinguish between two theories of lexical access: the "early access" theory, which entails that lexical access occurs at early (pre 200 ms) stages of processing, and the "late access" theory, which interprets this early activity as orthographic word-form identification rather than…

  14. Semantic size does not matter: "bigger" words are not recognized faster.

    PubMed

    Kang, Sean H K; Yap, Melvin J; Tse, Chi-Shing; Kurby, Christopher A

    2011-06-01

    Sereno, O'Donnell, and Sereno (2009) reported that words are recognized faster in a lexical decision task when their referents are physically large than when they are small, suggesting that "semantic size" might be an important variable that should be considered in visual word recognition research and modelling. We sought to replicate their size effect, but failed to find a significant latency advantage in lexical decision for "big" words (cf. "small" words), even though we used the same word stimuli as Sereno et al. and had almost three times as many subjects. We also examined existing data from visual word recognition megastudies (e.g., English Lexicon Project) and found that semantic size is not a significant predictor of lexical decision performance after controlling for the standard lexical variables. In summary, the null results from our lab experiment--despite a much larger subject sample size than Sereno et al.--converged with our analysis of megastudy lexical decision performance, leading us to conclude that semantic size does not matter for word recognition. Discussion focuses on why semantic size (unlike some other semantic variables) is unlikely to play a role in lexical decision.
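
    The covariate-control logic described above (testing whether a semantic variable predicts lexical decision latencies after standard lexical variables are partialled out) can be illustrated with a hierarchical regression sketch. The example below is not the authors' analysis; it uses simulated data and hypothetical predictor names, and simply shows how the R-squared increment for an added predictor would be tested.

    ```python
    # Minimal sketch (not the authors' code): test whether a semantic predictor
    # explains lexical decision RT variance beyond standard lexical covariates.
    # Data and variable names here are simulated / hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 1000
    df = pd.DataFrame({
        "log_freq": rng.normal(8, 2, n),        # log word frequency
        "length": rng.integers(3, 10, n),       # word length in letters
        "nhd": rng.poisson(4, n),               # orthographic neighborhood size
        "semantic_size": rng.normal(0, 1, n),   # rated physical size of referent
    })
    # Simulated RTs driven only by the lexical covariates (null semantic-size effect)
    df["rt"] = 900 - 25 * df["log_freq"] + 12 * df["length"] - 3 * df["nhd"] + rng.normal(0, 60, n)

    base = smf.ols("rt ~ log_freq + length + nhd", data=df).fit()
    full = smf.ols("rt ~ log_freq + length + nhd + semantic_size", data=df).fit()

    # If semantic size mattered beyond the covariates, the R^2 increment should be
    # reliable; compare_f_test gives the corresponding F test.
    f_stat, p_value, df_diff = full.compare_f_test(base)
    print(f"Delta R^2 = {full.rsquared - base.rsquared:.4f}, F = {f_stat:.2f}, p = {p_value:.3f}")
    ```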

  15. Deficits of congenital amusia beyond pitch: Evidence from impaired categorical perception of vowels in Cantonese-speaking congenital amusics

    PubMed Central

    Shao, Jing; Huang, Xunan

    2017-01-01

    Congenital amusia is a lifelong disorder of fine-grained pitch processing in music and speech. However, it remains unclear whether amusia is a pitch-specific deficit, or whether it affects frequency/spectral processing more broadly, such as the perception of formant frequency in vowels, apart from pitch. In this study, in order to illuminate the scope of the deficits, we compared the performance of 15 Cantonese-speaking amusics and 15 matched controls on the categorical perception of sound continua in four stimulus contexts: lexical tone, pure tone, vowel, and voice onset time (VOT). Whereas the lexical tone, pure tone, and vowel continua rely on frequency/spectral processing, the VOT continuum depends on duration/temporal processing. We found that the amusic participants performed similarly to controls in all stimulus contexts in identification, in terms of the across-category boundary location and boundary width. However, the amusic participants performed systematically worse than controls in discriminating stimuli in the three contexts that depend on frequency/spectral processing (lexical tone, pure tone, and vowel), whereas they performed normally when discriminating duration differences (VOT). These findings suggest that the deficit of amusia is probably not pitch specific, but affects frequency/spectral processing more broadly. Furthermore, there appeared to be differences in the impairment of frequency/spectral discrimination in speech and nonspeech contexts. The amusic participants exhibited less benefit in between-category discriminations than controls in speech contexts (lexical tone and vowel), suggesting reduced categorical perception; on the other hand, they performed worse than controls across the board, in both between- and within-category discriminations, in the nonspeech context (pure tone), suggesting impaired general auditory processing. These differences imply that the frequency/spectral-processing deficit might be manifested differentially in speech and nonspeech contexts in amusics: as a deficit of higher-level phonological processing for speech sounds, and as a deficit of lower-level auditory processing for nonspeech sounds. PMID:28829808
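
    As a rough illustration of the identification measures mentioned above (across-category boundary location and boundary width), one can fit a logistic psychometric function to identification proportions along a continuum. The sketch below uses invented data for a hypothetical seven-step continuum; it is not the analysis pipeline of the study.

    ```python
    # Minimal sketch (illustrative only): estimate category-boundary location and
    # width from identification responses along a 7-step continuum by fitting a
    # logistic psychometric function. The data below are invented for illustration.
    import numpy as np
    from scipy.optimize import curve_fit

    steps = np.arange(1, 8)                                          # continuum steps 1..7
    p_cat_a = np.array([0.97, 0.94, 0.85, 0.52, 0.18, 0.06, 0.03])   # proportion of "category A" responses

    def logistic(x, boundary, slope):
        """Decreasing logistic identification function."""
        return 1.0 / (1.0 + np.exp(slope * (x - boundary)))

    (boundary, slope), _ = curve_fit(logistic, steps, p_cat_a, p0=[4.0, 1.0])

    # One common definition of boundary width: the distance between the 25% and
    # 75% identification points, i.e. 2*ln(3)/slope for this parameterization.
    width = 2 * np.log(3) / slope
    print(f"boundary location = {boundary:.2f} steps, boundary width = {width:.2f} steps")
    ```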

  16. Turning an advantage into a disadvantage: ambiguity effects in lexical decision versus reading tasks.

    PubMed

    Piercey, C D; Joordens, S

    2000-06-01

    When performing a lexical decision task, participants can correctly categorize letter strings as words faster if they have multiple meanings (i.e., ambiguous words) than if they have one meaning (i.e., unambiguous words). In contrast, when reading connected text, participants tend to fixate longer on ambiguous words than on unambiguous words. Why are ambiguous words at an advantage in one word recognition task, and at a disadvantage in another? These disparate results can be reconciled if it is assumed that ambiguous words are relatively fast to reach a semantic-blend state sufficient for supporting lexical decisions, but then slow to escape the blend when the task requires a specific meaning be retrieved. We report several experiments that support this possibility.

  17. Individual Differences in the Joint Effects of Semantic Priming and Word Frequency Revealed by RT Distributional Analyses: The Role of Lexical Integrity

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.

    2009-01-01

    Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the…

  18. Automatic vigilance for negative words in lexical decision and naming: comment on Larsen, Mercer, and Balota (2006).

    PubMed

    Estes, Zachary; Adelman, James S

    2008-08-01

    An automatic vigilance hypothesis states that humans preferentially attend to negative stimuli, and this attention to negative valence disrupts the processing of other stimulus properties. Thus, negative words typically elicit slower color naming, word naming, and lexical decisions than neutral or positive words. Larsen, Mercer, and Balota analyzed the stimuli from 32 published studies, and they found that word valence was confounded with several lexical factors known to affect word recognition. Indeed, with these lexical factors covaried out, Larsen et al. found no evidence of automatic vigilance. The authors report a more sensitive analysis of 1011 words. Results revealed a small but reliable valence effect, such that negative words (e.g., "shark") elicit slower lexical decisions and naming than positive words (e.g., "beach"). Moreover, the relation between valence and recognition was categorical rather than linear; the extremity of a word's valence did not affect its recognition. This valence effect was not attributable to word length, frequency, orthographic neighborhood size, contextual diversity, first phoneme, or arousal. Thus, the present analysis provides the most powerful demonstration of automatic vigilance to date.

  19. Visual selective attention and reading efficiency are related in children.

    PubMed

    Casco, C; Tressoldi, P E; Dellantonio, A

    1998-09-01

    We investigated the relationship between visual selective attention and linguistic performance. Subjects were classified into four categories according to their accuracy in a letter cancellation task involving selective attention. The task consisted of searching for a target letter among a set of background letters, and accuracy was measured as a function of set size. We found that children with the lowest performance in the cancellation task showed a significantly slower reading rate and a higher number of visual reading errors than children with the highest performance. Results also show that these groups of searchers differ significantly in a lexical search task, whereas their performance did not differ in lexical decision and a syllable control task. The relationship between letter search and reading, as well as the finding that poor reader-searchers also perform poorly on lexical search tasks involving selective attention, suggests that the relationship between letter search and reading difficulty may reflect a deficit in a visual selective attention mechanism that is involved in all these tasks. A deficit in visual attention can be linked to the problems that disabled readers show in the function of the magnocellular stream, which culminates in posterior parietal cortex, an area that plays an important role in guiding visual attention.

  20. Developmental changes in the neural influence of sublexical information on semantic processing.

    PubMed

    Lee, Shu-Hui; Booth, James R; Chou, Tai-Li

    2015-07-01

    Functional magnetic resonance imaging (fMRI) was used to examine the developmental changes in a group of normally developing children (aged 8-12) and adolescents (aged 13-16) during semantic processing. We manipulated association strength (i.e., a global reading unit) and semantic radical (i.e., a local reading unit) to explore the interaction of lexical and sublexical semantic information in making semantic judgments. In the semantic judgment task, two types of stimuli were used: visually-similar (i.e., shared a semantic radical) versus visually-dissimilar (i.e., did not share a semantic radical) character pairs. Participants were asked to indicate if two Chinese characters, arranged according to association strength, were related in meaning. The results showed greater developmental increases in activation in left angular gyrus (BA 39) in the visually-similar compared to the visually-dissimilar pairs for the strong association. There were also greater age-related increases in angular gyrus for the strong compared to weak association in the visually-similar pairs. Both of these results suggest that shared semantics at the sublexical level facilitates the integration of overlapping features at the lexical level in older children. In addition, there was a larger developmental increase in left posterior middle temporal gyrus (BA 21) for the weak compared to strong association in the visually-dissimilar pairs, suggesting conflicting sublexical information placed greater demands on access to lexical representations in the older children. Altogether, these results suggest that older children are more sensitive to sublexical information when processing lexical representations. Copyright © 2015 Elsevier Ltd. All rights reserved.

  1. Semantic interference from distractor pictures in single-picture naming: evidence for competitive lexical selection.

    PubMed

    Jescheniak, Jörg D; Matushanskaya, Asya; Mädebach, Andreas; Müller, Matthias M

    2014-10-01

    Picture-naming studies have demonstrated interference from semantic-categorically related distractor words, but not from corresponding distractor pictures, and the lack of generality of the interference effect has been argued to challenge theories viewing lexical selection in speech production as a competitive process. Here, we demonstrate that semantic interference from context pictures does become visible, if sufficient attention is allocated to them. We combined picture naming with a spatial-cuing procedure. When participants' attention was shifted to the distractor, semantically related distractor pictures interfered with the response, as compared with unrelated distractor pictures. This finding supports models conceiving lexical retrieval as competitive (Levelt, Roelofs, & Meyer, 1999) but is difficult to reconcile with the response exclusion hypothesis (Finkbeiner & Caramazza, 2006b) proposed as an alternative.

  2. What are they thinking? Automated analysis of student writing about acid-base chemistry in introductory biology.

    PubMed

    Haudek, Kevin C; Prevost, Luanna B; Moscarella, Rosa A; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students' writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid-base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses.
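
    The classification step described above can be sketched with general-purpose tools. The example below is not the authors' SPSS workflow; it applies scikit-learn's linear discriminant analysis to made-up lexical-category counts and compares the resulting classifications to placeholder expert scores with Cohen's kappa, analogous to the interrater reliability reported.

    ```python
    # Minimal sketch of the classification step described above, using scikit-learn
    # instead of the SPSS tools the authors used. Lexical-category counts and
    # expert scores are random placeholders, not real student data.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import cohen_kappa_score

    rng = np.random.default_rng(1)
    n_responses, n_categories = 300, 7                       # e.g., 7 key lexical categories
    X = rng.poisson(2, size=(n_responses, n_categories))     # term counts per category per response
    y = rng.integers(0, 2, n_responses)                      # expert rating: 0 = incorrect, 1 = correct

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LinearDiscriminantAnalysis().fit(X_train, y_train)

    # Agreement between the discriminant classification and expert scoring,
    # analogous to the interrater reliability reported in the abstract.
    kappa = cohen_kappa_score(y_test, clf.predict(X_test))
    print(f"classification accuracy = {clf.score(X_test, y_test):.2f}, kappa vs. experts = {kappa:.2f}")
    ```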

  3. What Are They Thinking? Automated Analysis of Student Writing about Acid–Base Chemistry in Introductory Biology

    PubMed Central

    Haudek, Kevin C.; Prevost, Luanna B.; Moscarella, Rosa A.; Merrill, John; Urban-Lurain, Mark

    2012-01-01

    Students’ writing can provide better insight into their thinking than can multiple-choice questions. However, resource constraints often prevent faculty from using writing assessments in large undergraduate science courses. We investigated the use of computer software to analyze student writing and to uncover student ideas about chemistry in an introductory biology course. Students were asked to predict acid–base behavior of biological functional groups and to explain their answers. Student explanations were rated by two independent raters. Responses were also analyzed using SPSS Text Analysis for Surveys and a custom library of science-related terms and lexical categories relevant to the assessment item. These analyses revealed conceptual connections made by students, student difficulties explaining these topics, and the heterogeneity of student ideas. We validated the lexical analysis by correlating student interviews with the lexical analysis. We used discriminant analysis to create classification functions that identified seven key lexical categories that predict expert scoring (interrater reliability with experts = 0.899). This study suggests that computerized lexical analysis may be useful for automatically categorizing large numbers of student open-ended responses. Lexical analysis provides instructors unique insights into student thinking and a whole-class perspective that are difficult to obtain from multiple-choice questions or reading individual responses. PMID:22949425

  4. Individual differences in emotion processing: how similar are diffusion model parameters across tasks?

    PubMed

    Mueller, Christina J; White, Corey N; Kuchinke, Lars

    2017-11-27

    The goal of this study was to replicate findings of diffusion model parameters capturing emotion effects in a lexical decision task and to investigate whether these findings extend to other tasks of implicit emotion processing. Additionally, we were interested in the stability of diffusion model parameters across emotional stimuli and tasks for individual subjects. Responses to words in a lexical decision task were compared with responses to faces in a gender categorization task for stimuli of the emotion categories happy, neutral, and fear. Main effects of emotion, as well as the stability of emerging response-style patterns as evident in diffusion model parameters across these tasks, were analyzed. Based on earlier findings, drift rates were assumed to be more similar in response to stimuli of the same emotion category than to stimuli of a different emotion category. Results showed that the emotion effects differed across tasks, with a processing advantage for happy followed by neutral and fear-related words in the lexical decision task and a processing advantage for neutral followed by happy and fearful faces in the gender categorization task. Both emotion effects were captured in the estimated drift rate parameters and, in the case of the lexical decision task, also in the non-decision time parameters. A principal component analysis showed that, contrary to our hypothesis, drift rates were more similar within a specific task context than within a specific emotion category. Individual response patterns of subjects across tasks were evident in significant correlations of diffusion model parameters, including response styles, non-decision times, and information accumulation.
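
    For readers unfamiliar with the diffusion model parameters mentioned above, the toy simulation below shows how drift rate (evidence quality) and non-decision time jointly shape response times and accuracy. The parameter values and condition labels are arbitrary illustrations, not estimates from the study.

    ```python
    # Minimal sketch of the drift-diffusion idea: responses arise from noisy
    # evidence accumulation toward one of two boundaries, and emotion effects can
    # be expressed as changes in drift rate or non-decision time. All parameter
    # values here are arbitrary illustrations.
    import numpy as np

    def simulate_ddm(drift, boundary=1.0, non_decision=0.3, noise=1.0, dt=0.001, rng=None):
        """Simulate one trial; returns (choice, reaction time in seconds)."""
        if rng is None:
            rng = np.random.default_rng()
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:          # accumulate until a boundary is hit
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        return (1 if evidence > 0 else 0), t + non_decision

    rng = np.random.default_rng(2)
    # A higher drift rate for an "easier" emotion category yields faster, more accurate responses.
    for label, drift in [("happy words", 1.2), ("fear-related words", 0.6)]:
        trials = [simulate_ddm(drift, rng=rng) for _ in range(500)]
        rts = np.array([rt for _, rt in trials])
        accuracy = np.mean([choice for choice, _ in trials])
        print(f"{label}: mean RT = {rts.mean():.3f} s, accuracy = {accuracy:.2f}")
    ```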

  5. Phonological-orthographic consistency for Japanese words and its impact on visual and auditory word recognition.

    PubMed

    Hino, Yasushi; Kusunose, Yuu; Miyamura, Shinobu; Lupker, Stephen J

    2017-01-01

    In most models of word processing, the degrees of consistency in the mappings between orthographic, phonological, and semantic representations are hypothesized to affect reading time. Following Hino, Miyamura, and Lupker's (2011) examination of the orthographic-phonological (O-P) and orthographic-semantic (O-S) consistency for 1,114 Japanese words (339 katakana and 775 kanji words), in the present research, we initially attempted to measure the phonological-orthographic (P-O) consistency for those same words. In contrast to the O-P and O-S consistencies, which were equivalent for kanji and katakana words, the P-O relationships were much more inconsistent for the kanji words than for the katakana words. The impact of kanji words' P-O consistency was then examined in both visual and auditory word recognition tasks. Although there was no effect of P-O consistency in the standard visual lexical-decision task, significant effects were detected in a lexical-decision task with auditory stimuli, in a perceptual identification task using masked visual stimuli, and in a lexical-decision task with degraded visual stimuli. The implications of these results are discussed in terms of the impact of P-O consistency in auditory and visual word recognition. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  6. Language identification from visual-only speech signals

    PubMed Central

    Ronquest, Rebecca E.; Levi, Susannah V.; Pisoni, David B.

    2010-01-01

    Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification. PMID:20675804

  7. The Bayesian reader: explaining word recognition as an optimal Bayesian decision process.

    PubMed

    Norris, Dennis

    2006-04-01

    This article presents a theory of visual word recognition that assumes that, in the tasks of word identification, lexical decision, and semantic categorization, human readers behave as optimal Bayesian decision makers. This leads to the development of a computational model of word recognition, the Bayesian reader. The Bayesian reader successfully simulates some of the most significant data on human reading. The model accounts for the nature of the function relating word frequency to reaction time and identification threshold, the effects of neighborhood density and its interaction with frequency, and the variation in the pattern of neighborhood density effects seen in different experimental tasks. Both the general behavior of the model and the way the model predicts different patterns of results in different tasks follow entirely from the assumption that human readers approximate optimal Bayesian decision makers. ((c) 2006 APA, all rights reserved).
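
    The core idea of the Bayesian reader, that word identification amounts to accumulating a posterior over lexical candidates in which word frequency acts as a prior, can be illustrated with a toy example. The sketch below is a drastically simplified stand-in, not Norris's model: it uses an invented four-word lexicon, made-up frequencies, and a crude independent-letter noise model.

    ```python
    # Toy illustration of the Bayesian-reader idea (not Norris's actual model):
    # treat each fixation sample as noisy evidence about letter identities and
    # accumulate a posterior over a small lexicon whose prior reflects frequency.
    import string
    import numpy as np

    rng = np.random.default_rng(3)
    lexicon = ["cat", "car", "cap", "cot"]             # toy lexicon
    frequency = np.array([120.0, 300.0, 40.0, 15.0])   # made-up occurrence counts
    prior = frequency / frequency.sum()                # frequency acts as the prior

    P_CORRECT = 0.7   # probability that a sampled letter matches the printed letter

    def noisy_sample(word):
        """One perceptual sample: each letter is read correctly with P_CORRECT,
        otherwise replaced by one of the 25 other letters."""
        return "".join(c if rng.random() < P_CORRECT
                       else rng.choice([l for l in string.ascii_lowercase if l != c])
                       for c in word)

    def likelihood(candidate, sample):
        """P(sample | candidate), assuming independent letter confusions."""
        return np.prod([P_CORRECT if a == b else (1 - P_CORRECT) / 25
                        for a, b in zip(candidate, sample)])

    target = "car"
    posterior = prior.copy()
    for n in range(1, 6):
        s = noisy_sample(target)
        posterior *= np.array([likelihood(w, s) for w in lexicon])
        posterior /= posterior.sum()                   # Bayes' rule, renormalized
        print(f"sample {n} ('{s}'):", {w: round(p, 3) for w, p in zip(lexicon, posterior)})
    # High-frequency words start with an advantage from the prior, but accumulating
    # samples eventually dominates it; deciding when the posterior is good enough
    # for the task at hand is what links this scheme to frequency effects on RT.
    ```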

  8. Acquisition of the novel name-nameless category (N3C) principle by young children who have Down syndrome.

    PubMed

    Mervis, C B; Bertrand, J

    1995-11-01

    Acquisition of the novel name-nameless category (N3C) principle by 22 children with Down syndrome between the ages of 2.42 and 3.33 years was examined to investigate the generalizability of a new approach to early lexical development: the developmental lexical principles framework. Results indicated that, as predicted, the N3C principle (operationally defined as the ability to fast-map a new word to a [basic-level] category) is not available at the start of lexical acquisition. The predicted link between the ability to use the N3C principle and the ability to perform exhaustive categorization of objects was supported. Children who used the principle had significantly larger productive vocabularies than did those who did not and, according to maternal report, had begun to acquire new words rapidly.

  9. Modelling individual difference in visual categorization.

    PubMed

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides some historical perspective, starting with models that predicted no individual differences, moving to those that captured group differences and those that predict true individual differences, and ending with more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  10. Modelling individual difference in visual categorization

    PubMed Central

    Shen, Jianhong; Palmeri, Thomas J.

    2016-01-01

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review focuses both on how formal models of visual categorization have captured individual differences and on how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides some historical perspective, starting with models that predicted no individual differences, moving to those that captured group differences and those that predict true individual differences, and ending with more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization. PMID:28154496

  11. Morphological Influences on the Recognition of Monosyllabic Monomorphemic Words

    ERIC Educational Resources Information Center

    Baayen, R. H.; Feldman, L. B.; Schreuder, R.

    2006-01-01

    Balota et al. [Balota, D., Cortese, M., Sergent-Marshall, S., Spieler, D., & Yap, M. (2004). Visual word recognition for single-syllable words. "Journal of Experimental Psychology: General, 133," 283-316] studied lexical processing in word naming and lexical decision using hierarchical multiple regression techniques for a large data set of…

  12. Transposed-Letter and Laterality Effects in Lexical Decision

    ERIC Educational Resources Information Center

    Perea, Manuel; Fraga, Isabel

    2006-01-01

    Two divided visual field lexical decision experiments were conducted to examine the role of the cerebral hemispheres in transposed-letter similarity effects. In Experiment 1, we created two types of nonwords: nonadjacent transposed-letter nonwords ("TRADEGIA"; the base word was "TRAGEDIA," the Spanish for "TRAGEDY") and two-letter different…

  13. Accessible Reading Assessments for Students with Disabilities

    ERIC Educational Resources Information Center

    Abedi, Jamal; Bayley, Robert; Ewers, Nancy; Mundhenk, Kimberly; Leon, Seth; Kao, Jenny; Herman, Joan

    2012-01-01

    Assessments developed and field tested for the mainstream student population may not be accessible for students with disabilities (SWDs) as a result of the impact of extraneous variables, including cognitive features, such as depth of knowledge required, grammatical and lexical complexity, lexical density, and textual/visual features. This study…

  14. Get rich quick: the signal to respond procedure reveals the time course of semantic richness effects during visual word recognition.

    PubMed

    Hargreaves, Ian S; Pexman, Penny M

    2014-05-01

    According to several current frameworks, semantic processing involves an early influence of language-based information followed by later influences of object-based information (e.g., situated simulations; Santos, Chaigneau, Simmons, & Barsalou, 2011). In the present study we examined whether these predictions extend to the influence of semantic variables in visual word recognition. We investigated the time course of semantic richness effects in visual word recognition using a signal-to-respond (STR) paradigm fitted to a lexical decision task (LDT) and a semantic categorization task (SCT). We used linear mixed-effects models to examine the relative contributions of language-based (number of senses, ARC) and object-based (imageability, number of features, body-object interaction ratings) descriptions of semantic richness at four STR durations (75, 100, 200, and 400 ms). Results showed an early influence of number of senses and ARC in the SCT. In both the LDT and the SCT, object-based effects were the last to influence participants' decision latencies. We interpret our results within a framework in which semantic processes are available to influence word recognition as a function of their availability over time and of their relevance to task-specific demands. Copyright © 2014 Elsevier B.V. All rights reserved.
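
    A minimal sketch of the kind of mixed-effects analysis mentioned above is given below, assuming by-subject random intercepts and one language-based and one object-based richness predictor. The data, predictor names, and model specification are illustrative placeholders, not the authors' actual analysis.

    ```python
    # Minimal sketch of a linear mixed-effects analysis of decision latencies with
    # by-subject random intercepts and both a language-based and an object-based
    # richness predictor. Data are simulated; variable names are illustrative.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)
    n_subjects, n_items = 30, 40
    rows = []
    for s in range(n_subjects):
        subj_offset = rng.normal(0, 40)                      # subject random intercept
        for i in range(n_items):
            senses = rng.poisson(2) + 1                      # language-based richness
            imageability = rng.normal(4, 1)                  # object-based richness
            rt = 650 + subj_offset - 8 * senses - 15 * imageability + rng.normal(0, 50)
            rows.append({"subject": s, "rt": rt, "n_senses": senses,
                         "imageability": imageability})
    df = pd.DataFrame(rows)

    model = smf.mixedlm("rt ~ n_senses + imageability", data=df, groups=df["subject"])
    print(model.fit().summary())
    ```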

  15. A dual task priming investigation of right hemisphere inhibition for people with left hemisphere lesions

    PubMed Central

    2012-01-01

    Background: During normal semantic processing, the left hemisphere (LH) is suggested to restrict right hemisphere (RH) performance via interhemispheric suppression. However, a lesion in the LH or the use of concurrent tasks to overload the LH's attentional resource balance has been reported to result in RH disinhibition, with subsequent improvements in RH performance. The current study examines variations in RH semantic processing in the context of unilateral LH lesions and the manipulation of the interhemispheric processing resource balance, in order to explore the relevance of RH disinhibition to hemispheric contributions to semantic processing following a unilateral LH lesion. Methods: RH disinhibition was examined for nine participants with a single LH lesion and 13 matched controls using the dual task paradigm. Hemispheric performance on a divided visual field lexical decision semantic priming task was compared across three verbal memory load conditions of zero, two, and six words. Related stimuli consisted of categorically related, associatively related, and categorically and associatively related prime-target pairs. Response time and accuracy data were recorded and analyzed using linear mixed model analysis, and planned contrasts were performed to compare priming effects in both visual fields for each of the memory load conditions. Results: Control participants exhibited significant bilateral visual field priming for all related conditions (p < .05) and a LH advantage over all three memory load conditions. Participants with LH lesions exhibited an improvement in RH priming performance as memory load increased, with priming for the categorically related condition occurring only in the 2- and 6-word memory conditions. RH disinhibition was also reflected for the LH damage (LHD) group by the removal of the LH performance advantage following the introduction of the memory load conditions. Conclusions: The results from the control group are consistent with suggestions of an age-related hemispheric asymmetry reduction and indicate that in healthy aging compensatory bilateral activation may reduce the impact of inhibition. In comparison, the results for the LHD group indicate that following a LH lesion, RH semantic processing can be manipulated and enhanced by the introduction of a verbal memory task designed to engage LH resources and allow disinhibition of RH processing. PMID:22429687

  16. The effect of compression and attention allocation on speech intelligibility. II

    NASA Astrophysics Data System (ADS)

    Choi, Sangsook; Carrell, Thomas

    2004-05-01

    Previous investigations of the effects of amplitude compression on measures of speech intelligibility have shown inconsistent results. Recently, a novel paradigm was used to investigate the possibility of more consistent findings with a measure of speech perception that is not based entirely on intelligibility (Choi and Carrell, 2003). That study exploited a dual-task paradigm using a pursuit rotor online visual-motor tracking task (Dlhopolsky, 2000) along with a word repetition task. Intensity-compressed words caused reduced performance on the tracking task as compared to uncompressed words when subjects engaged in a simultaneous word repetition task. This suggested an increased cognitive load when listeners processed compressed words. A stronger result might be obtained if a single resource (linguistic) is required rather than two (linguistic and visual-motor) resources. In the present experiment a visual lexical decision task and an auditory word repetition task were used. The visual stimuli for the lexical decision task were blurred and presented in a noise background. The compressed and uncompressed words for repetition were placed in speech-shaped noise. Participants with normal hearing and vision conducted word repetition and lexical decision tasks both independently and simultaneously. The pattern of results is discussed and compared to the previous study.

  17. Investigating the flow of information during speaking: the impact of morpho-phonological, associative, and categorical picture distractors on picture naming

    PubMed Central

    Bölte, Jens; Böhl, Andrea; Dobel, Christian; Zwitserlood, Pienie

    2015-01-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl–garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitute evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels. PMID:26528209

  18. Investigating the flow of information during speaking: the impact of morpho-phonological, associative, and categorical picture distractors on picture naming.

    PubMed

    Bölte, Jens; Böhl, Andrea; Dobel, Christian; Zwitserlood, Pienie

    2015-01-01

    In three experiments, participants named target pictures by means of German compound words (e.g., Gartenstuhl-garden chair), each accompanied by two different distractor pictures (e.g., lawn mower and swimming pool). Targets and distractor pictures were semantically related either associatively (garden chair and lawn mower) or by a shared semantic category (garden chair and wardrobe). Within each type of semantic relation, target and distractor pictures either shared morpho-phonological (word-form) information (Gartenstuhl with Gartenzwerg, garden gnome, and Gartenschlauch, garden hose) or not. A condition with two completely unrelated pictures served as baseline. Target naming was facilitated when distractor and target pictures were morpho-phonologically related. This is clear evidence for the activation of word-form information of distractor pictures. Effects were larger for associatively than for categorically related distractors and targets, which constitute evidence for lexical competition. Mere categorical relatedness, in the absence of morpho-phonological overlap, resulted in null effects (Experiments 1 and 2), and only speeded target naming when effects reflect only conceptual, but not lexical processing (Experiment 3). Given that distractor pictures activate their word forms, the data cannot be easily reconciled with discrete serial models. The results fit well with models that allow information to cascade forward from conceptual to word-form levels.

  19. Ceci n'est pas un walrus: lexical processing in vigilance performance.

    PubMed

    Neigel, Alexis R; Claypoole, Victoria L; Hancock, Gabriella M; Fraulini, Nicholas W; Szalma, James L

    2018-03-01

    Vigilance, or the ability to sustain attention for extended periods of time, has traditionally been examined using a myriad of symbolic, cognitive, and sensory tasks. However, the current literature indicates a relative lack of empirical investigation on vigilance performance involving lexical processing. To address this gap in the literature, the present study examined the effect of stimulus meaning on vigilance performance (i.e., lure effects). A sample of 126 observers completed a 12-min lexical vigilance task in a research laboratory. Observers were randomly assigned to a standard task (targets and neutral events only) or a lure task (lures, targets, and neutral events presented), wherein lures were stimuli that were categorically similar to target stimuli. A novel analytical approach was utilized to examine the results; the lure groups were divided based on false alarm performance post hoc. Groups were further divided to demonstrate that the presence of lure stimuli significantly affects the decision-making criteria used to assess the performance of lexical vigilance tasks. We also discuss the effect of lure stimuli on measures related to signal detection theory (e.g., sensitivity and response bias).
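
    The signal detection measures mentioned at the end of the abstract, sensitivity and response bias, are typically computed from hit and false-alarm rates. The sketch below shows one standard computation of d' and the criterion c; the counts are invented, and the log-linear correction is just one common way to handle extreme proportions.

    ```python
    # Minimal sketch of the signal detection measures mentioned above: sensitivity
    # (d') and response bias (criterion c) from hit and false-alarm counts.
    # The counts are invented; a log-linear correction guards against rates of 0 or 1.
    from scipy.stats import norm

    def sdt_measures(hits, misses, false_alarms, correct_rejections):
        # Log-linear correction for extreme proportions (add 0.5 to each cell, 1 to each denominator)
        hit_rate = (hits + 0.5) / (hits + misses + 1)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
        d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
        criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
        return d_prime, criterion

    # e.g., a hypothetical observer in the lure condition: many false alarms to lure stimuli
    d, c = sdt_measures(hits=34, misses=6, false_alarms=18, correct_rejections=62)
    print(f"d' = {d:.2f}, criterion c = {c:.2f}")
    ```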

  20. The Resolution of Visual Noise in Word Recognition

    ERIC Educational Resources Information Center

    Pae, Hye K.; Lee, Yong-Won

    2015-01-01

    This study examined lexical processing in English by native speakers of Korean and Chinese, compared to that of native speakers of English, using normal, alternated, and inverse fonts. Sixty-four adult students participated in a lexical decision task. The findings demonstrated similarities and differences in accuracy and latency among the three L1…

  1. Reading Polymorphemic Dutch Compounds: Toward a Multiple Route Model of Lexical Processing

    ERIC Educational Resources Information Center

    Kuperman, Victor; Schreuder, Robert; Bertram, Raymond; Baayen, R. Harald

    2009-01-01

    This article reports an eye-tracking experiment with 2,500 polymorphemic Dutch compounds presented in isolation for visual lexical decision while readers' eye movements were registered. The authors found evidence that both full forms of compounds ("dishwasher") and their constituent morphemes (e.g., "dish," "washer," "er") and morphological…

  2. Effects of Morphological Family Size for Young Readers

    ERIC Educational Resources Information Center

    Perdijk, Kors; Schreuder, Robert; Baayen, R. Harald; Verhoeven, Ludo

    2012-01-01

    Dutch children, from the second and fourth grade of primary school, were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed by mixed-model regression methods to see whether morphological Family Size influenced decision…

  3. Masked Priming with Orthographic Neighbors: A Test of the Lexical Competition Assumption

    ERIC Educational Resources Information Center

    Nakayama, Mariko; Sears, Christopher R.; Lupker, Stephen J.

    2008-01-01

    In models of visual word identification that incorporate inhibitory competition among activated lexical units, a word's higher frequency neighbors will be the word's strongest competitors. Preactivation of these neighbors by a prime is predicted to delay the word's identification. Using the masked priming paradigm (K. I. Forster & C. Davis, 1984,…

  4. There Is Something about Grammatical Category in Chinese Visual Word Recognition

    ERIC Educational Resources Information Center

    Kwong, Oi Yee

    2016-01-01

    The differential processing of nouns and verbs has been attributed to a combination of morphological, syntactic and semantic factors which are often intertwined with other general lexical properties. This study tested the noun-verb difference with Chinese disyllabic words controlled on various lexical parameters. As Chinese words are free from…

  5. Morphological Structures in Visual Word Recognition: The Case of Arabic

    ERIC Educational Resources Information Center

    Abu-Rabia, Salim; Awwad, Jasmin (Shalhoub)

    2004-01-01

    This research examined the function within lexical access of the main morphemic units from which most Arabic words are assembled, namely roots and word patterns. The present study focused on the derivation of nouns, in particular, whether the lexical representation of Arabic words reflects their morphological structure and whether recognition of a…

  6. Divided attention disrupts perceptual encoding during speech recognition.

    PubMed

    Mattys, Sven L; Palmer, Shekeila D

    2015-03-01

    Performing a secondary task while listening to speech has a detrimental effect on speech processing, but the locus of the disruption within the speech system is poorly understood. Recent research has shown that cognitive load imposed by a concurrent visual task increases dependency on lexical knowledge during speech processing, but it does not affect lexical activation per se. This suggests that "lexical drift" under cognitive load occurs either as a post-lexical bias at the decisional level or as a secondary consequence of reduced perceptual sensitivity. This study aimed to adjudicate between these alternatives using a forced-choice task that required listeners to identify noise-degraded spoken words with or without the addition of a concurrent visual task. Adding cognitive load increased the likelihood that listeners would select a word acoustically similar to the target even though its frequency was lower than that of the target. Thus, there was no evidence that cognitive load led to a high-frequency response bias. Rather, cognitive load seems to disrupt sublexical encoding, possibly by impairing perceptual acuity at the auditory periphery.

  7. Using Student Writing and Lexical Analysis to Reveal Student Thinking about the Role of Stop Codons in the Central Dogma

    PubMed Central

    Prevost, Luanna B.; Smith, Michelle K.; Knight, Jennifer K.

    2016-01-01

    Previous work has shown that students have persistent difficulties in understanding how central dogma processes can be affected by a stop codon mutation. To explore these difficulties, we modified two multiple-choice questions from the Genetics Concept Assessment into three open-ended questions that asked students to write about how a stop codon mutation potentially impacts replication, transcription, and translation. We then used computer-assisted lexical analysis combined with human scoring to categorize student responses. The lexical analysis models showed high agreement with human scoring, demonstrating that this approach can be successfully used to analyze large numbers of student written responses. The results of this analysis show that students’ ideas about one process in the central dogma can affect their thinking about subsequent and previous processes, leading to mixed models of conceptual understanding. PMID:27909016

  8. Computational Linguistics in the Netherlands 1996. Papers from the CLIN Meeting (7th, Eindhoven, Netherlands, November 15, 1996).

    ERIC Educational Resources Information Center

    Landsbergen, Jan, Ed.; Odijk, Jan, Ed.; van Deemter, Kees, Ed.; van Zanten, Gert Veldhuijzen, Ed.

    Papers from the meeting on computational linguistics include: "Conversational Games, Belief Revision and Bayesian Networks" (Stephen G. Pulman); "Valence Alternation without Lexical Rules" (Gosse Bouma); "Filtering Left Dislocation Chains in Parsing Categorical Grammar" (Crit Cremers, Maarten Hijzelendoorn);…

  9. The Acquisition of Productive Rules in Child and Adult Language Learners

    ERIC Educational Resources Information Center

    Schuler, Kathryn Dolores

    2017-01-01

    In natural language, evidence suggests that, while some rules are productive (regular), applying broadly to new words, others are restricted to a specific set of lexical items (irregular). Further, the literature suggests that children make a categorical distinction between regular and irregular rules, applying only regular rules productively…

  10. A Lexical Analysis of Environmental Sound Categories

    ERIC Educational Resources Information Center

    Houix, Olivier; Lemaitre, Guillaume; Misdariis, Nicolas; Susini, Patrick; Urdapilleta, Isabel

    2012-01-01

    In this article we report on listener categorization of meaningful environmental sounds. A starting point for this study was the phenomenological taxonomy proposed by Gaver (1993b). In the first experimental study, 15 participants classified 60 environmental sounds and indicated the properties shared by the sounds in each class. In a second…

  11. Lexical Competition is Enhanced in the Left Hemisphere: Evidence from Different Types of Orthographic Neighbors

    ERIC Educational Resources Information Center

    Perea, Manuel; Acha, Joana; Fraga, Isabel

    2008-01-01

    Two divided visual field lexical decision experiments were conducted to examine the role of the cerebral hemispheres in orthographic neighborhood effects. In Experiment 1, we employed two types of words: words with many substitution neighbors (high-"N") and words with few substitution neighbors (low-"N"). Results showed a facilitative effect of…

  12. Decomposition into Multiple Morphemes during Lexical Access: A Masked Priming Study of Russian Nouns

    ERIC Educational Resources Information Center

    Kazanina, Nina; Dukova-Zheleva, Galina; Geber, Dana; Kharlamov, Viktor; Tonciulescu, Keren

    2008-01-01

    The study reports the results of a masked priming experiment with morphologically complex Russian nouns. Participants performed a lexical decision task to a visual target that differed from its prime in one consonant. Three conditions were included: (1) "transparent," in which the prime was morphologically related to the target and contained the…

  13. Grammatical number agreement processing using the visual half-field paradigm: an event-related brain potential study.

    PubMed

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-02-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere's processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun ("The grateful niece asked herself/*themselves…") or morphologically, e.g., subject/verb ("Industrial scientists develop/*develops…"). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. Copyright © 2013 Elsevier B.V. All rights reserved.

  14. Grammatical number agreement processing using the visual half-field paradigm: An event-related brain potential study

    PubMed Central

    Kemmer, Laura; Coulson, Seana; Kutas, Marta

    2014-01-01

    Despite indications in the split-brain and lesion literatures that the right hemisphere is capable of some syntactic analysis, few studies have investigated right hemisphere contributions to syntactic processing in people with intact brains. Here we used the visual half-field paradigm in healthy adults to examine each hemisphere’s processing of correct and incorrect grammatical number agreement marked either lexically, e.g., antecedent/reflexive pronoun (“The grateful niece asked herself/*themselves…”) or morphologically, e.g., subject/verb (“Industrial scientists develop/*develops…”). For reflexives, response times and accuracy of grammaticality decisions suggested similar processing regardless of visual field of presentation. In the subject/verb condition, we observed similar response times and accuracies for central and right visual field (RVF) presentations. For left visual field (LVF) presentation, response times were longer and accuracy rates were reduced relative to RVF presentation. An event-related brain potential (ERP) study using the same materials revealed similar ERP responses to the reflexive pronouns in the two visual fields, but very different ERP effects to the subject/verb violations. For lexically marked violations on reflexives, P600 was elicited by stimuli in both the LVF and RVF; for morphologically marked violations on verbs, P600 was elicited only by RVF stimuli. These data suggest that both hemispheres can process lexically marked pronoun agreement violations, and do so in a similar fashion. Morphologically marked subject/verb agreement errors, however, showed a distinct LH advantage. PMID:24326084

  15. Symbol-string sensitivity and adult performance in lexical decision.

    PubMed

    Pammer, Kristen; Lavis, Ruth; Cooper, Charity; Hansen, Peter C; Cornelissen, Piers L

    2005-09-01

    In this study of adult readers, we used a symbol-string task to assess participants' sensitivity to the position of briefly presented, non-alphabetic but letter-like symbols. We found that sensitivity in this task explained a significant proportion of sample variance in visual lexical decision. Based on a number of controls, we show that this relationship cannot be explained by other factors, including chronological age, intelligence, speed of processing and/or concentration, short-term memory consolidation, or fixation stability. This approach represents a new way to elucidate how, and to what extent, individual variation in pre-orthographic visual and cognitive processes impinges on reading skills, and the results suggest that limitations set by visuo-spatial processes constrain visual word recognition.

  16. Vernier But Not Grating Acuity Contributes to an Early Stage of Visual Word Processing.

    PubMed

    Tan, Yufei; Tong, Xiuhong; Chen, Wei; Weng, Xuchu; He, Sheng; Zhao, Jing

    2018-03-28

    The process of reading words depends heavily on efficient visual skills, including analyzing and decomposing basic visual features. Surprisingly, previous reading-related studies have almost exclusively focused on gross aspects of visual skills, while only very few have investigated the role of finer skills. The present study filled this gap and examined the relations of two finer visual skills, grating acuity (the ability to resolve periodic luminance variations across space) and Vernier acuity (the ability to detect and discriminate the relative locations of features), to Chinese character processing as measured by character form-matching and lexical decision tasks in skilled adult readers. The results showed that Vernier acuity was significantly correlated with performance in character form-matching but not in visual symbol form-matching, while no correlation was found between grating acuity and character processing. Interestingly, we found no correlation of the two visual skills with lexical decision performance. These findings provide, for the first time, empirical evidence that finer visual skills, particularly as reflected in Vernier acuity, may directly contribute to an early stage of hierarchical word processing.

  17. Beyond the visual word form area: the orthography-semantics interface in spelling and reading.

    PubMed

    Purcell, Jeremy J; Shea, Jennifer; Rapp, Brenda

    2014-01-01

    Lexical orthographic information provides the basis for recovering the meanings of words in reading and for generating correct word spellings in writing. Research has provided evidence that an area of the left ventral temporal cortex, a subregion of what is often referred to as the visual word form area (VWFA), plays a significant role specifically in lexical orthographic processing. The current investigation goes beyond this previous work by examining the neurotopography of the interface of lexical orthography with semantics. We apply a novel lesion mapping approach with three individuals with acquired dysgraphia and dyslexia who suffered lesions to left ventral temporal cortex. To map cognitive processes to their neural substrates, this lesion mapping approach applies similar logical constraints to those used in cognitive neuropsychological research. Using this approach, this investigation: (a) identifies a region anterior to the VWFA that is important in the interface of orthographic information with semantics for reading and spelling; (b) determines that, within this orthography-semantics interface region (OSIR), access to orthography from semantics (spelling) is topographically distinct from access to semantics from orthography (reading); (c) provides evidence that, within this region, there is modality-specific access to and from lexical semantics for both spoken and written modalities, in both word production and comprehension. Overall, this study contributes to our understanding of the neural architecture at the lexical orthography-semantic-phonological interface within left ventral temporal cortex.

  18. The precise time course of lexical activation: MEG measurements of the effects of frequency, probability, and density in lexical decision.

    PubMed

    Stockall, Linnaea; Stringfellow, Andrew; Marantz, Alec

    2004-01-01

    Visually presented letter strings consistently yield three MEG response components: the M170, associated with letter-string processing (Tarkiainen, Helenius, Hansen, Cornelissen, & Salmelin, 1999); the M250, affected by phonotactic probability (Pylkkänen, Stringfellow, & Marantz, 2002); and the M350, responsive to lexical frequency (Embick, Hackl, Schaeffer, Kelepir, & Marantz, 2001). Pylkkänen et al. found evidence that the M350 reflects lexical activation prior to competition among phonologically similar words. We investigate the effects of lexical and sublexical frequency and neighborhood density on the M250 and M350 through orthogonal manipulation of phonotactic probability, density, and frequency. The results confirm that probability, but not density, affects the latency of the M250 and M350; however, an interaction between probability and density on M350 latencies suggests an earlier influence of neighborhoods than previously reported.

  19. Pitch Perception in the First Year of Life, a Comparison of Lexical Tones and Musical Pitch.

    PubMed

    Chen, Ao; Stevens, Catherine J; Kager, René

    2017-01-01

    Pitch variation is pervasive in speech, regardless of the language to which infants are exposed. Lexical tone perception is influenced by general sensitivity to pitch. We examined whether lexical tone perception may develop in parallel with pitch perception in another cognitive domain, namely music. Using a visual fixation paradigm, one hundred and one 4- and 12-month-old Dutch infants were tested on their discrimination of Chinese rising and dipping lexical tones as well as comparable three-note musical pitch contours. The 4-month-old infants failed to show a discrimination effect in either condition, whereas the 12-month-old infants succeeded in both conditions. These results suggest that lexical tone perception may reflect and relate to general pitch perception abilities, which may serve as a basis for developing more complex language and musical skills.

  20. Two-Year-Olds Will Name Artifacts by Their Functions.

    ERIC Educational Resources Information Center

    Nelson, Deborah G. Kemler; Russell, Rachel; Duke, Nell; Jones, Kate

    2000-01-01

    Three studies examined lexical categorization in 2-year- olds. Findings indicated that even with minimal opportunities to familiarize themselves with novel artifacts, children generalized their names in accordance with the objects' functions, even when they had to discover the functions on their own or when all the test objects had some…

  1. Structure of Complex Verb Forms in Meiteilon

    ERIC Educational Resources Information Center

    Singh, Lourembam Surjit

    2016-01-01

    This piece of work proposes to descriptively investigate the structures of complex verbs in Meiteilon. The categorization of such verbs is based on the nature of semantic and syntactic functions of a lexeme or verbal lexeme. A lexeme or verbal lexeme in Meiteilon may have multifunctional properties in the nature of occurrence. Such lexical items…

  2. Listeners are maximally flexible in updating phonetic beliefs over time.

    PubMed

    Saltzman, David; Myers, Emily

    2018-04-01

    Perceptual learning serves as a mechanism for listeners to adapt to novel phonetic information. Distributional tracking theories posit that this adaptation occurs as a result of listeners accumulating talker-specific distributional information about the phonetic category in question (Kleinschmidt & Jaeger, 2015, Psychological Review, 122). What is not known is how listeners build these talker-specific distributions; that is, if they aggregate all information received over a certain time period, or if they rely more heavily upon the most recent information received and down-weight older, consolidated information. In the present experiment, listeners were exposed to four interleaved blocks of a lexical decision task and a phonetic categorization task in which the lexical decision blocks were designed to bias perception in opposite directions along an "s"-"sh" continuum. Listeners returned several days later and completed the identical task again. Evidence was consistent with listeners using a relatively short temporal window of integration at the individual session level. Namely, in each individual session, listeners' perception of an "s"-"sh" contrast was biased by the information in the immediately preceding lexical decision block, and there was no evidence that listeners summed their experience with the talker over the entire session. Similarly, the magnitude of the bias effect did not change between sessions, consistent with the idea that talker-specific information remains flexible, even after consolidation. In general, results suggest that listeners are maximally flexible when considering how to categorize speech from a novel talker.

  3. Mandarin Visual Speech Information

    ERIC Educational Resources Information Center

    Chen, Trevor H.

    2010-01-01

    While the auditory-only aspects of Mandarin speech are heavily-researched and well-known in the field, this dissertation addresses its lesser-known aspects: The visual and audio-visual perception of Mandarin segmental information and lexical-tone information. Chapter II of this dissertation focuses on the audiovisual perception of Mandarin…

  4. Phase synchronization of delta and theta oscillations increase during the detection of relevant lexical information.

    PubMed

    Brunetti, Enzo; Maldonado, Pedro E; Aboitiz, Francisco

    2013-01-01

    During discourse monitoring, detecting the relevance of incoming lexical information may be critical for incorporating it into updated mental representations in memory. Because, in these situations, the relevance of lexical information is defined by abstract rules held in memory, a central question is how an abstract level of knowledge maintained in mind mediates the detection of lower-level semantic information. In the present study, we propose that neuronal oscillations participate in the detection of relevant lexical information based on "kept in mind" rules derived from more abstract semantic information. We tested this hypothesis using an experimental paradigm that restricted the detection of relevance to inferences based on explicit information, thus controlling for ambiguities arising from implicit aspects. We used a categorization task in which semantic relevance was defined in advance by the congruency between a category kept in mind (abstract knowledge) and the lexical-semantic information presented. Our results show that during the detection of relevant lexical information, phase synchronization of neuronal oscillations selectively increases in the delta and theta frequency bands during the interval of semantic analysis. These increases occurred irrespective of the semantic category maintained in memory, had a temporal profile specific to each subject, and were mainly induced, as they had no effect on the evoked mean global field power. Recruitment of an increased number of electrode pairs was also a robust observation during the detection of semantically contingent words. These results are consistent with the notion that the detection of relevant lexical information based on a particular semantic rule could be mediated by increased global phase synchronization of neuronal oscillations, which may contribute to the recruitment of an extended set of cortical regions.
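
    A common way to quantify the kind of between-electrode phase synchronization reported in this record is the phase-locking value (PLV) computed within a frequency band. The sketch below is a minimal illustration of that measure, not the authors' analysis pipeline; the array shape, sampling rate, and band limits are assumptions.

    ```python
    # Minimal sketch: trial-averaged phase-locking value (PLV) between channel pairs
    # in a given frequency band (illustrative only; not the study's pipeline).
    import numpy as np
    from itertools import combinations
    from scipy.signal import butter, filtfilt, hilbert

    def band_plv(eeg, fs, low, high):
        """eeg: array of shape (n_trials, n_channels, n_samples); band limits in Hz."""
        b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
        phase = np.angle(hilbert(filtfilt(b, a, eeg, axis=-1), axis=-1))  # instantaneous phase
        n_ch = eeg.shape[1]
        plv = np.zeros((n_ch, n_ch))
        for i, j in combinations(range(n_ch), 2):
            diff = phase[:, i, :] - phase[:, j, :]                 # phase difference per trial
            # PLV at each time point = |mean over trials of exp(i*diff)|, then averaged over time
            plv[i, j] = plv[j, i] = np.abs(np.exp(1j * diff).mean(axis=0)).mean()
        return plv

    # Hypothetical usage: theta-band (4-8 Hz) synchronization for 40 trials, 32 channels
    # eeg = np.random.randn(40, 32, 512); theta_plv = band_plv(eeg, fs=256, low=4, high=8)
    ```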

  5. English Listeners Use Suprasegmental Cues to Lexical Stress Early during Spoken-Word Recognition

    ERIC Educational Resources Information Center

    Jesse, Alexandra; Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose: We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method: In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g.,…

  6. Parallel Distributed Processing and Lexical-Semantic Effects in Visual Word Recognition: Are a Few Stages Necessary?

    ERIC Educational Resources Information Center

    Borowsky, Ron; Besner, Derek

    2006-01-01

    D. C. Plaut and J. R. Booth presented a parallel distributed processing model that purports to simulate human lexical decision performance. This model (and D. C. Plaut, 1995) offers a single mechanism account of the pattern of factor effects on reaction time (RT) between semantic priming, word frequency, and stimulus quality without requiring a…

  7. Language Processing in Children with Cochlear Implants: A Preliminary Report on Lexical Access for Production and Comprehension

    ERIC Educational Resources Information Center

    Schwartz, Richard G.; Steinman, Susan; Ying, Elizabeth; Mystal, Elana Ying; Houston, Derek M.

    2013-01-01

    In this plenary paper, we present a review of language research in children with cochlear implants along with an outline of a 5-year project designed to examine lexical access for production and recognition. The project will use auditory priming, picture naming with auditory or visual interfering stimuli (Picture-Word Interference and…

  8. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism.

    PubMed

    Cantiani, Chiara; Choudhury, Naseem A; Yu, Yan H; Shafer, Valerie L; Schwartz, Richard G; Benasich, April A

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD.

  9. From Sensory Perception to Lexical-Semantic Processing: An ERP Study in Non-Verbal Children with Autism

    PubMed Central

    Cantiani, Chiara; Choudhury, Naseem A.; Yu, Yan H.; Shafer, Valerie L.; Schwartz, Richard G.; Benasich, April A.

    2016-01-01

    This study examines electrocortical activity associated with visual and auditory sensory perception and lexical-semantic processing in nonverbal (NV) or minimally-verbal (MV) children with Autism Spectrum Disorder (ASD). Currently, there is no agreement on whether these children comprehend incoming linguistic information and whether their perception is comparable to that of typically developing children. Event-related potentials (ERPs) of 10 NV/MV children with ASD and 10 neurotypical children were recorded during a picture-word matching paradigm. Atypical ERP responses were evident at all levels of processing in children with ASD. Basic perceptual processing was delayed in both visual and auditory domains but overall was similar in amplitude to typically-developing children. However, significant differences between groups were found at the lexical-semantic level, suggesting more atypical higher-order processes. The results suggest that although basic perception is relatively preserved in NV/MV children with ASD, higher levels of processing, including lexical-semantic functions, are impaired. The use of passive ERP paradigms that do not require active participant response shows significant potential for assessment of non-compliant populations such as NV/MV children with ASD. PMID:27560378

  10. Reading Proficiency and Adaptability in Orthographic Processing: An Examination of the Effect of Type of Orthography Read on Brain Activity in Regular and Dyslexic Readers

    PubMed Central

    Bar-Kochva, Irit; Breznitz, Zvia

    2014-01-01

    Regular readers were found to adjust the routine of reading to the demands of processing imposed by different orthographies. Dyslexic readers may lack such adaptability in reading. This hypothesis was tested among readers of Hebrew, as Hebrew has two forms of script differing in phonological transparency. Event-related potentials were recorded from 24 regular and 24 dyslexic readers while they carried out a lexical decision task in these two forms of script. The two forms of script elicited distinct amplitudes and latencies at ∼165 ms after target onset, and these effects were larger in regular than in dyslexic readers. These early effects appeared not to be merely a result of the visual difference between the two forms of script (the presence of diacritics). The next effect of form of script was obtained on amplitudes elicited at latencies associated with orthographic-lexical processing and the categorization of stimuli, and these appeared earlier in regular readers (∼340 ms) than in dyslexic readers (∼400 ms). The behavioral measures showed inferior reading skills of dyslexic readers compared to regular readers in reading of both forms of script. Taken together, the results suggest that although dyslexic readers are not indifferent to the type of orthography read, they fail to adjust the routine of reading to the demands of processing imposed by both a transparent and an opaque orthography. PMID:24465844

  11. Top-down preparation modulates visual categorization but not subjective awareness of objects presented in natural backgrounds.

    PubMed

    Koivisto, Mika; Kahila, Ella

    2017-04-01

    Top-down processes are widely assumed to be essential in visual awareness, the subjective experience of seeing. However, previous studies have not tried to separate directly the roles of different types of top-down influences in visual awareness. We studied the effects of top-down preparation and object substitution masking (OSM) on visual awareness during categorization of objects presented in natural scene backgrounds. The results showed that preparation facilitated categorization but did not influence visual awareness. OSM reduced visual awareness and impaired categorization. The dissociations between the effects of preparation and OSM on visual awareness and on categorization imply that they act at different stages of cognitive processing. We propose that preparation exerts its influence at the top of the visual hierarchy, whereas OSM interferes with processes occurring at lower levels of the hierarchy. These lower-level processes play an essential role in visual awareness. Copyright © 2017 Elsevier Ltd. All rights reserved.

  12. Processing of threat-related information outside the focus of visual attention.

    PubMed

    Calvo, Manuel G; Castillo, M Dolores

    2005-05-01

    This study investigates whether threat-related words are especially likely to be perceived in unattended locations of the visual field. Threat-related, positive, and neutral words were presented at fixation as probes in a lexical decision task. The probe word was preceded by 2 simultaneous prime words (1 foveal, i.e., at fixation; 1 parafoveal, i.e., 2.2 deg. of visual angle from fixation), which were presented for 150 ms, one of which was either identical or unrelated to the probe. Results showed significant facilitation in lexical response times only for the probe threat words when primed parafoveally by an identical word presented in the right visual field. We conclude that threat-related words have privileged access to processing outside the focus of attention. This reveals a cognitive bias in the preferential, parallel processing of information that is important for adaptation.

  13. Masked Priming Is Abstract in the Left and Right Visual Fields

    ERIC Educational Resources Information Center

    Bowers, Jeffrey S.; Turner, Emma L.

    2005-01-01

    Two experiments assessed masked priming for words presented to the left and right visual fields in a lexical decision task. In both Experiments, the same magnitude and pattern of priming was obtained for visually similar ("kiss"-"KISS") and dissimilar ("read"-"READ") prime-target pairs. These findings…

  14. Visual Speech Primes Open-Set Recognition of Spoken Words

    ERIC Educational Resources Information Center

    Buchwald, Adam B.; Winters, Stephen J.; Pisoni, David B.

    2009-01-01

    Visual speech perception has become a topic of considerable interest to speech researchers. Previous research has demonstrated that perceivers neurally encode and use speech information from the visual modality, and this information has been found to facilitate spoken word recognition in tasks such as lexical decision (Kim, Davis, & Krins,…

  15. Similarity relations in visual search predict rapid visual categorization

    PubMed Central

    Mohan, Krithika; Arun, S. P.

    2012-01-01

    How do we perform rapid visual categorization? It is widely thought that categorization involves evaluating the similarity of an object to other category items, but the underlying features and similarity relations remain unknown. Here, we hypothesized that categorization performance is based on perceived similarity relations between items within and outside the category. To this end, we measured the categorization performance of human subjects on three diverse visual categories (animals, vehicles, and tools) and across three hierarchical levels (superordinate, basic, and subordinate levels among animals). For the same subjects, we measured their perceived pair-wise similarities between objects using a visual search task. Regardless of category and hierarchical level, we found that the time taken to categorize an object could be predicted using its similarity to members within and outside its category. We were able to account for several classic categorization phenomena, such as (a) the longer times required to reject category membership; (b) the longer times to categorize atypical objects; and (c) differences in performance across tasks and across hierarchical levels. These categorization times were also accounted for by a model that extracts coarse structure from an image. The striking agreement observed between categorization and visual search suggests that these two disparate tasks depend on a shared coarse object representation. PMID:23092947
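
    The central claim of this record, that categorization time can be predicted from an item's perceived similarity to members within and outside its category, can be illustrated with a simple linear model. The sketch below is illustrative only; the variable names and the ordinary-least-squares form are assumptions, not the authors' exact model.

    ```python
    # Minimal sketch: predict per-object categorization RT from mean perceived
    # similarity to within-category vs. outside-category items (illustrative).
    import numpy as np

    def fit_rt_from_similarity(within_sim, outside_sim, rt):
        """Ordinary least squares fit of rt ~ b0 + b1*within_sim + b2*outside_sim."""
        X = np.column_stack([np.ones(len(rt)), within_sim, outside_sim])
        coefs, *_ = np.linalg.lstsq(X, rt, rcond=None)
        predicted = X @ coefs
        r = np.corrcoef(predicted, rt)[0, 1]          # fit quality as a correlation
        return coefs, r

    # Hypothetical usage with per-object similarities estimated from visual search
    # and per-object categorization RTs; under the account above, similarity to
    # non-members should slow categorization while within-category similarity speeds it.
    # coefs, r = fit_rt_from_similarity(within_sim, outside_sim, rt)
    ```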

  16. Determinants of structural choice in visually situated sentence production.

    PubMed

    Myachykov, Andriy; Garrod, Simon; Scheepers, Christoph

    2012-11-01

    Three experiments investigated how perceptual, structural, and lexical cues affect structural choices during English transitive sentence production. Participants described transitive events under combinations of visual cueing of attention (toward either agent or patient) and structural priming with and without semantic match between the notional verb in the prime and the target event. Speakers had a stronger preference for passive-voice sentences (1) when their attention was directed to the patient, (2) upon reading a passive-voice prime, and (3) when the verb in the prime matched the target event. The verb-match effect was the by-product of an interaction between visual cueing and verb match: the increase in the proportion of passive-voice responses with matching verbs was limited to the agent-cued condition. Persistence of visual cueing effects in the presence of both structural and lexical cues suggests a strong coupling between referent-directed visual attention and Subject assignment in a spoken sentence. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Discrete Emotion Effects on Lexical Decision Response Times

    PubMed Central

    Briesemeister, Benny B.; Kuchinke, Lars; Jacobs, Arthur M.

    2011-01-01

    Our knowledge about affective processes, especially concerning effects on cognitive demands like word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and different notions of valence, broad agreement on a two dimensional model of affective space has been achieved. Alternative models like the discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust and fear significantly contributing to this account. Moreover, these effects even persist in an experiment with discrete emotion conditions when the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two dimensional affective space account. PMID:21887307

  18. Discrete emotion effects on lexical decision response times.

    PubMed

    Briesemeister, Benny B; Kuchinke, Lars; Jacobs, Arthur M

    2011-01-01

    Our knowledge about affective processes, especially concerning effects on cognitive demands like word processing, is increasing steadily. Several studies consistently document valence and arousal effects, and although there is some debate on possible interactions and different notions of valence, broad agreement on a two dimensional model of affective space has been achieved. Alternative models like the discrete emotion theory have received little interest in word recognition research so far. Using backward elimination and multiple regression analyses, we show that five discrete emotions (i.e., happiness, disgust, fear, anger and sadness) explain as much variance as two published dimensional models assuming continuous or categorical valence, with the variables happiness, disgust and fear significantly contributing to this account. Moreover, these effects even persist in an experiment with discrete emotion conditions when the stimuli are controlled for emotional valence and arousal levels. We interpret this result as evidence for discrete emotion effects in visual word recognition that cannot be explained by the two dimensional affective space account.
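
    The analysis named in this record, backward elimination over multiple regression models with discrete-emotion predictors, can be sketched as follows. The selection criterion, variable names, and use of statsmodels are assumptions for illustration, not the authors' exact procedure.

    ```python
    # Minimal sketch: backward elimination over discrete-emotion predictors of
    # item-level lexical decision RTs (illustrative only).
    import statsmodels.api as sm

    def backward_eliminate(X, y, alpha=0.05):
        """Iteratively drop the least significant predictor until all p-values < alpha.

        X: pandas DataFrame of predictors (e.g., emotion ratings per word); y: mean RTs.
        """
        cols = list(X.columns)
        while cols:
            model = sm.OLS(y, sm.add_constant(X[cols])).fit()
            pvals = model.pvalues.drop("const")
            worst = pvals.idxmax()
            if pvals[worst] < alpha:                 # every remaining predictor is significant
                return model, cols
            cols.remove(worst)                       # drop the weakest predictor and refit
        return None, []

    # Hypothetical usage on an item-level DataFrame `df`:
    # model, kept = backward_eliminate(df[["happiness", "disgust", "fear", "anger", "sadness"]], df["rt"])
    ```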

  19. Categorical perception of intonation contrasts: effects of listeners' language background.

    PubMed

    Liu, Chang; Rodriguez, Amanda

    2012-06-01

    Intonation perception of English speech was examined for English- and Chinese-native listeners. F0 contour was manipulated from falling to rising patterns for the final words of three sentences. The listeners' task was to identify and discriminate the intonation of each sentence (question versus statement). English and Chinese listeners had significant differences in the identification functions, such as the categorical boundary and the slope. In the discrimination functions, Chinese listeners showed greater peakedness than their English peers. The cross-linguistic differences in intonation perception were similar to previous findings in the perception of lexical tones, likely due to listeners' language background differences.

  20. Grasping the invisible: semantic processing of abstract words.

    PubMed

    Zdrazilova, Lenka; Pexman, Penny M

    2013-12-01

    The problem of how abstract word meanings are represented has been a challenging one. In the present study, we extended the semantic richness approach (e.g., Yap, Tan, Pexman, & Hargreaves in Psychonomic Bulletin & Review 18:742-750, 2011) to abstract words, examining the effects of six semantic richness variables on lexical-semantic processing for 207 abstract nouns. The candidate richness dimensions were context availability (CA), sensory experience rating (SER), valence, arousal, semantic neighborhood (SN), and number of associates (NoA). The behavioral tasks were lexical decision (LDT) and semantic categorization (SCT). Our results showed that the semantic richness variables were significantly related to both LDT and SCT latencies, even after lexical and orthographic factors were controlled. The patterns of richness effects varied across tasks, with CA effects in the LDT, and SER and valence effects in the SCT. These results provide new insight into how abstract meanings may be grounded, and are consistent with a dynamic, multidimensional framework for semantic processing.
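
    Testing whether semantic richness variables relate to latencies even after lexical and orthographic factors are controlled, as this record describes, is commonly done as an incremental model comparison. A minimal sketch under assumed column names (they are not the authors' variable labels):

    ```python
    # Minimal sketch: do richness variables add explained variance in item-level
    # latencies beyond lexical/orthographic controls? (column names are illustrative)
    import statsmodels.formula.api as smf

    CONTROLS = "frequency + length + orthographic_neighbors"
    RICHNESS = "context_availability + ser + valence + arousal + sn + noa"

    def incremental_r2(df, dv="ldt_rt"):
        """Compare a controls-only model against controls plus richness predictors."""
        base = smf.ols(f"{dv} ~ {CONTROLS}", data=df).fit()
        full = smf.ols(f"{dv} ~ {CONTROLS} + {RICHNESS}", data=df).fit()
        f_stat, p_value, df_diff = full.compare_f_test(base)   # F-test on the added block
        return base.rsquared, full.rsquared, f_stat, p_value

    # Hypothetical usage on an item-level DataFrame `items` with those columns:
    # base_r2, full_r2, f_stat, p = incremental_r2(items, dv="sct_rt")
    ```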

  1. Effects of Cumulative Frequency, but Not of Frequency Trajectory, in Lexical Decision Times of Older Adults and Patients with Alzheimer's Disease

    ERIC Educational Resources Information Center

    Caza, Nicole; Moscovitch, Morris

    2005-01-01

    The purpose of this study was to investigate the issue of age-limited learning effects on visual lexical decision in normal and pathological aging, by using words with different frequency trajectories and cumulative frequencies. We selected words that objectively changed in frequency trajectory from an early word count (Thorndike, 1921, 1932;…

  2. Effect of Syllable Congruency in Sixth Graders in the Lexical Decision Task with Masked Priming

    ERIC Educational Resources Information Center

    Chetail, Fabienne; Mathey, Stephanie

    2012-01-01

    The aim of this study was to investigate the role of the syllable in visual recognition of French words in Grade 6. To do so, the syllabic congruency effect was examined in the lexical decision task combined with masked priming. Target words were preceded by pseudoword primes sharing the first letters that either corresponded to the syllable…

  3. Children Do Not Overcome Lexical Biases Where Adults Do: The Role of the Referential Scene in Garden-Path Recovery

    ERIC Educational Resources Information Center

    Kidd, Evan; Stewart, Andrew J.; Serratrice, Ludovica

    2011-01-01

    In this paper we report on a visual world eye-tracking experiment that investigated the differing abilities of adults and children to use referential scene information during reanalysis to overcome lexical biases during sentence processing. The results showed that adults incorporated aspects of the referential scene into their parse as soon as it…

  4. Putting lexical constraints in context into the visual-world paradigm.

    PubMed

    Novick, Jared M; Thompson-Schill, Sharon L; Trueswell, John C

    2008-06-01

    Prior eye-tracking studies of spoken sentence comprehension have found that the presence of two potential referents, e.g., two frogs, can guide listeners toward a Modifier interpretation of Put the frog on the napkin... despite strong lexical biases associated with Put that support a Goal interpretation of the temporary ambiguity (Tanenhaus, M. K., Spivey-Knowlton, M. J., Eberhard, K. M. & Sedivy, J. C. (1995). Integration of visual and linguistic information in spoken language comprehension. Science, 268, 1632-1634; Trueswell, J. C., Sekerina, I., Hill, N. M. & Logrip, M. L. (1999). The kindergarten-path effect: Studying on-line sentence processing in young children. Cognition, 73, 89-134). This pattern is not expected under constraint-based parsing theories: cue conflict between the lexical evidence (which supports the Goal analysis) and the visuo-contextual evidence (which supports the Modifier analysis) should result in uncertainty about the intended analysis and partial consideration of the Goal analysis. We reexamined these put studies (Experiment 1) by introducing a response time-constraint and a spatial contrast between competing referents (a frog on a napkin vs. a frog in a bowl). If listeners immediately interpret on the... as the start of a restrictive modifier, then their eye movements should rapidly converge on the intended referent (the frog on something). However, listeners showed this pattern only when the phrase was unambiguously a Modifier (Put the frog that's on the...). Syntactically ambiguous trials resulted in transient consideration of the Competitor animal (the frog in something). A reading study was also run on the same individuals (Experiment 2) and performance was compared between the two experiments. Those individuals who relied heavily on lexical biases to resolve a complement ambiguity in reading (The man heard/realized the story had been...) showed increased sensitivity to both lexical and contextual constraints in the put-task; i.e., increased consideration of the Goal analysis in 1-Referent Scenes, but also adeptness at using spatial constraints of prepositions (in vs. on) to restrict referential alternatives in 2-Referent Scenes. These findings cross-validate visual world and reading methods and support multiple-constraint theories of sentence processing in which individuals differ in their sensitivity to lexical contingencies.

  5. Representation of Colour Concepts in Bilingual Cognition: The Case of Japanese Blues

    ERIC Educational Resources Information Center

    Athanasopoulos, Panos; Damjanovic, Ljubica; Krajciova, Andrea; Sasaki, Miho

    2011-01-01

    Previous studies demonstrate that lexical coding of colour influences categorical perception of colour, such that participants are more likely to rate two colours to be more similar if they belong to the same linguistic category (Roberson et al., 2000, 2005). Recent work shows changes in Greek-English bilinguals' perception of within and…

  6. On Resolving a Paradox in the Analysis of Navajo Syntax.

    ERIC Educational Resources Information Center

    Jelinek, Eloise

    An analysis of relative clauses in Navajo looks at a paradox that is rooted in the assumption that in Navajo, as in English, argument positions not occupied by some free lexical item must be occupied categorically by an EC. It examines patterns of and constraints on nominals with relation to the relative clause, theory concerning argumental…

  7. The Role of Polysemy in Masked Semantic and Translation Priming

    ERIC Educational Resources Information Center

    Finkbeiner, Matthew; Forster, Kenneth; Nicol, Janet; Nakamura, Kumiko

    2004-01-01

    A well-known asymmetry exists in the bilingual masked priming literature in which lexical decision is used: namely, masked primes in the dominant language (L1) facilitate decision times on targets in the less dominant language (L2), but not vice versa. In semantic categorization, on the other hand, priming is symmetrical. In Experiments 1-3 we…

  8. Lexical Activation during Sentence Comprehension in Adolescents with History of Specific Language Impairment

    PubMed Central

    Borovsky, Arielle; Burns, Erin; Elman, Jeffrey L.; Evans, Julia L.

    2015-01-01

    One remarkable characteristic of speech comprehension in typically developing (TD) children and adults is the speed with which the listener can integrate information across multiple lexical items to anticipate upcoming referents. Although children with Specific Language Impairment (SLI) show lexical deficits (Sheng & McGregor, 2010) and slower speed of processing (Leonard et al., 2007), relatively little is known about how these deficits manifest in real-time sentence comprehension. In this study, we examine lexical activation in the comprehension of simple transitive sentences in adolescents with a history of SLI and age-matched, TD peers. Participants listened to sentences that consisted of the form, Article-Agent-Action-Article-Theme, (e.g., The pirate chases the ship) while viewing pictures of four objects that varied in their relationship to the Agent and Action of the sentence (e.g., Target, Agent-Related, Action-Related, and Unrelated). Adolescents with SLI were as fast as their TD peers to fixate on the sentence’s final item (the Target) but differed in their post-action onset visual fixations to the Action-Related item. Additional exploratory analyses of the spatial distribution of their visual fixations revealed that the SLI group had a qualitatively different pattern of fixations to object images than did the control group. The findings indicate that adolescents with SLI integrate lexical information across words to anticipate likely or expected meanings with the same relative fluency and speed as do their TD peers. However, the failure of the SLI group to show increased fixations to Action-Related items after the onset of the action suggests lexical integration deficits that result in failure to consider alternate sentence interpretations. PMID:24099807

  9. A taste for words and sounds: a case of lexical-gustatory and sound-gustatory synesthesia

    PubMed Central

    Colizoli, Olympia; Murre, Jaap M. J.; Rouw, Romke

    2013-01-01

    Gustatory forms of synesthesia involve the automatic and consistent experience of tastes that are triggered by non-taste related inducers. We present a case of lexical-gustatory and sound-gustatory synesthesia within one individual, SC. Most words and a subset of non-linguistic sounds induce the experience of taste, smell and physical sensations for SC. SC's lexical-gustatory associations were significantly more consistent than those of a group of controls. We tested for effects of presentation modality (visual vs. auditory), taste-related congruency, and synesthetic inducer-concurrent direction using a priming task. SC's performance did not differ significantly from a trained control group. We used functional magnetic resonance imaging to investigate the neural correlates of SC's synesthetic experiences by comparing her brain activation to the literature on brain networks related to language, music, and sound processing, in addition to synesthesia. Words that induced a strong taste were contrasted to words that induced weak-to-no tastes (“tasty” vs. “tasteless” words). Brain activation was also measured during passive listening to music and environmental sounds. Brain activation patterns showed evidence that two regions are implicated in SC's synesthetic experience of taste and smell: the left anterior insula and left superior parietal lobe. Anterior insula activation may reflect the synesthetic taste experience. The superior parietal lobe is proposed to be involved in binding sensory information across sub-types of synesthetes. We conclude that SC's synesthesia is genuine and reflected in her brain activation. The type of inducer (visual-lexical, auditory-lexical, and non-lexical auditory stimuli) could be differentiated based on patterns of brain activity. PMID:24167497

  10. How brand names are special: brands, words, and hemispheres.

    PubMed

    Gontijo, Possidonia F D; Rayman, Janice; Zhang, Shi; Zaidel, Eran

    2002-09-01

    Previous research has consistently shown differences between the processing of proper names and of common nouns, leading to the belief that proper names possess a special neuropsychological status. We investigate the category of brand names and suggest that brand names also have a special neuropsychological status, but one which is different from proper names. The findings suggest that the hemispheric lexical status of the brand names is mixed--they behave like words in some respects and like nonwords in others. Our study used familiar upper case brand names, common nouns, and two different types of nonwords ("weird" and "normal") differing in length, as stimuli in a lateralized lexical decision task (LDT). Common nouns, brand names, weird nonwords, and normal nonwords were recognized in that decreasing order of speed and accuracy. A right visual field (RVF) advantage was found for all four lexical types. Interestingly, brand names, similar to nonwords, were found to be less lateralized than common nouns, consistent with theories of category-specific lexical processing. Further, brand names were the only type of lexical items to show a capitalization effect: brand names were recognized faster when they were presented in upper case than in lower case. In addition, while string length affected the recognition of common nouns only in the left visual field (LVF) and the recognition of nonwords only in the RVF, brand names behaved like common nouns in exhibiting length effects only in the LVF. Copyright 2002 Elsevier Science (USA)

  11. The Timing of Visual Object Categorization

    PubMed Central

    Mack, Michael L.; Palmeri, Thomas J.

    2011-01-01

    An object can be categorized at different levels of abstraction: as natural or man-made, animal or plant, bird or dog, or as a Northern Cardinal or Pyrrhuloxia. There has been growing interest in understanding how quickly categorizations at different levels are made and how the timing of those perceptual decisions changes with experience. We specifically contrast two perspectives on the timing of object categorization at different levels of abstraction. By one account, the relative timing implies a relative timing of stages of visual processing that are tied to particular levels of object categorization: Fast categorizations are fast because they precede other categorizations within the visual processing hierarchy. By another account, the relative timing reflects when perceptual features are available over time and the quality of perceptual evidence used to drive a perceptual decision process: Fast simply means fast, it does not mean first. Understanding the short-term and long-term temporal dynamics of object categorizations is key to developing computational models of visual object recognition. We briefly review a number of models of object categorization and outline how they explain the timing of visual object categorization at different levels of abstraction. PMID:21811480

  12. Got Rhythm...For Better and for Worse. Cross-Modal Effects of Auditory Rhythm on Visual Word Recognition

    ERIC Educational Resources Information Center

    Brochard, Renaud; Tassin, Maxime; Zagar, Daniel

    2013-01-01

    The present research aimed to investigate whether, as previously observed with pictures, background auditory rhythm would also influence visual word recognition. In a lexical decision task, participants were presented with bisyllabic visual words, segmented into two successive groups of letters, while an irrelevant strongly metric auditory…

  13. Naming and categorizing objects: task differences modulate the polarity of semantic effects in the picture-word interference paradigm.

    PubMed

    Hantsch, Ansgar; Jescheniak, Jörg D; Mädebach, Andreas

    2012-07-01

    The picture-word interference paradigm is a prominent tool for studying lexical retrieval during speech production. When participants name the pictures, interference from semantically related distractor words has regularly been shown. By contrast, when participants categorize the pictures, facilitation from semantically related distractors has typically been found. In the extant studies, however, differences in the task instructions (naming vs. categorizing) were confounded with the response level: While responses in naming were typically located at the basic level (e.g., "dog"), responses were located at the superordinate level in categorization (e.g., "animal"). The present study avoided this confound by having participants respond at the basic level in both naming and categorization, using the same pictures, distractors, and verbal responses. Our findings confirm the polarity reversal of the semantic effects--that is, semantic interference in naming, and semantic facilitation in categorization. These findings show that the polarity reversal of the semantic effect is indeed due to the different tasks and is not an artifact of the different response levels used in previous studies. Implications for current models of language production are discussed.

  14. Size matters: bigger is faster.

    PubMed

    Sereno, Sara C; O'Donnell, Patrick J; Sereno, Margaret E

    2009-06-01

    A largely unexplored aspect of lexical access in visual word recognition is "semantic size"--namely, the real-world size of an object to which a word refers. A total of 42 participants performed a lexical decision task on concrete nouns denoting either big or small objects (e.g., bookcase or teaspoon). Items were matched pairwise on relevant lexical dimensions. Participants' reaction times were reliably faster to semantically "big" versus "small" words. The results are discussed in terms of possible mechanisms, including more active representations for "big" words, due to the ecological importance attributed to large objects in the environment and the relative speed of neural responses to large objects.

  15. Auditory perception modulated by word reading.

    PubMed

    Cao, Liyu; Klepp, Anne; Schnitzler, Alfons; Gross, Joachim; Biermann-Ruben, Katja

    2016-10-01

    Theories of embodied cognition positing that sensorimotor areas are indispensable during language comprehension are supported by neuroimaging and behavioural studies. Among others, the auditory system has been suggested to be important for understanding sound-related words (visually presented) and the motor system for action-related words. In this behavioural study, using a sound detection task embedded in a lexical decision task, we show that in participants with high lexical decision performance sound verbs improve auditory perception. The amount of modulation was correlated with lexical decision performance. Our study provides convergent behavioural evidence of auditory cortex involvement in word processing, supporting the view of embodied language comprehension concerning the auditory domain.

  16. The Characteristics and Limits of Rapid Visual Categorization

    PubMed Central

    Fabre-Thorpe, Michèle

    2011-01-01

    Visual categorization appears both effortless and virtually instantaneous. The study by Thorpe et al. (1996) was the first to estimate the processing time necessary to perform fast visual categorization of animals in briefly flashed (20 ms) natural photographs. They observed a large differential EEG activity between target and distracter correct trials that developed from 150 ms after stimulus onset, a value that was later shown to be even shorter in monkeys! With such strong processing time constraints, it was difficult to escape the conclusion that rapid visual categorization was relying on massively parallel, essentially feed-forward processing of visual information. Since 1996, we have conducted a large number of studies to determine the characteristics and limits of fast visual categorization. The present chapter will review some of the main results obtained. I will argue that rapid object categorizations in natural scenes can be done without focused attention and are most likely based on coarse and unconscious visual representations activated with the first available (magnocellular) visual information. Fast visual processing proved efficient for the categorization of large superordinate object or scene categories, but shows its limits when more detailed basic representations are required. The representations for basic objects (dogs, cars) or scenes (mountain or sea landscapes) need additional processing time to be activated. This finding is at odds with the widely accepted idea that such basic representations are at the entry level of the system. Interestingly, focused attention is still not required to perform these time consuming basic categorizations. Finally we will show that object and context processing can interact very early in an ascending wave of visual information processing. We will discuss how such data could result from our experience with a highly structured and predictable surrounding world that shaped neuronal visual selectivity. PMID:22007180

  17. Is the Go/No-Go Lexical Decision Task Preferable to the Yes/No Task with Developing Readers?

    ERIC Educational Resources Information Center

    Moret-Tatay, Carmen; Perea, Manuel

    2011-01-01

    The lexical decision task is probably the most common laboratory visual word identification task together with the naming task. In the usual setup, participants need to press the "yes" button when the stimulus is a word and the "no" button when the stimulus is not a word. A number of studies have employed this task with developing readers;…

  18. Rapid interactions between lexical semantic and word form analysis during word recognition in context: evidence from ERPs.

    PubMed

    Kim, Albert; Lai, Vicky

    2012-05-01

    We used ERPs to investigate the time course of interactions between lexical semantic and sublexical visual word form processing during word recognition. Participants read sentence-embedded pseudowords that orthographically resembled a contextually supported real word (e.g., "She measured the flour so she could bake a ceke…") or did not (e.g., "She measured the flour so she could bake a tont…") along with nonword consonant strings (e.g., "She measured the flour so she could bake a srdt…"). Pseudowords that resembled a contextually supported real word ("ceke") elicited an enhanced positivity at 130 msec (P130), relative to real words (e.g., "She measured the flour so she could bake a cake…"). Pseudowords that did not resemble a plausible real word ("tont") enhanced the N170 component, as did nonword consonant strings ("srdt"). The effect pattern shows that the visual word recognition system is, perhaps, counterintuitively, more rapidly sensitive to minor than to flagrant deviations from contextually predicted inputs. The findings are consistent with rapid interactions between lexical and sublexical representations during word recognition, in which rapid lexical access of a contextually supported word (CAKE) provides top-down excitation of form features ("cake"), highlighting the anomaly of an unexpected word "ceke."

  19. Does the advantage of the upper part of words occur at the lexical level?

    PubMed

    Perea, Manuel; Comesaña, Montserrat; Soares, Ana P

    2012-11-01

    Several recent studies have shown that the upper part of words is more important than the lower part in visual word recognition. Here, we examine whether or not this advantage arises at the lexical or at the letter (letter feature) level. To examine this issue, we conducted two lexical decision experiments in which words/pseudowords were preceded by a very brief (50-ms) presentation of their upper or lower parts (e.g., ). If the advantage for the upper part of words arises at the letter (letter feature) level, the effect should occur for both words and pseudowords. Results revealed an advantage for the upper part of words, but not for pseudowords. This suggests that the advantage for the upper part of words occurs at the lexical level, rather than at the letter (or letter feature) level.

  20. English Listeners Use Suprasegmental Cues to Lexical Stress Early During Spoken-Word Recognition

    PubMed Central

    Poellmann, Katja; Kong, Ying-Yee

    2017-01-01

    Purpose We used an eye-tracking technique to investigate whether English listeners use suprasegmental information about lexical stress to speed up the recognition of spoken words in English. Method In a visual world paradigm, 24 young English listeners followed spoken instructions to choose 1 of 4 printed referents on a computer screen (e.g., “Click on the word admiral”). Displays contained a critical pair of words (e.g., ˈadmiral–ˌadmiˈration) that were segmentally identical for their first 2 syllables but differed suprasegmentally in their 1st syllable: One word began with primary lexical stress, and the other began with secondary lexical stress. All words had phrase-level prominence. Listeners' relative proportion of eye fixations on these words indicated their ability to differentiate them over time. Results Before critical word pairs became segmentally distinguishable in their 3rd syllables, participants fixated target words more than their stress competitors, but only if targets had initial primary lexical stress. The degree to which stress competitors were fixated was independent of their stress pattern. Conclusions Suprasegmental information about lexical stress modulates the time course of spoken-word recognition. Specifically, suprasegmental information on the primary-stressed syllable of words with phrase-level prominence helps in distinguishing the word from phonological competitors with secondary lexical stress. PMID:28056135

  1. Biological origins of color categorization.

    PubMed

    Skelton, Alice E; Catchpole, Gemma; Abbott, Joshua T; Bosten, Jenny M; Franklin, Anna

    2017-05-23

    The biological basis of the commonality in color lexicons across languages has been hotly debated for decades. Prior evidence that infants categorize color could provide support for the hypothesis that color categorization systems are not purely constructed by communication and culture. Here, we investigate the relationship between infants' categorization of color and the commonality across color lexicons, and the potential biological origin of infant color categories. We systematically mapped infants' categorical recognition memory for hue onto a stimulus array used previously to document the color lexicons of 110 nonindustrialized languages. Following familiarization to a given hue, infants' response to a novel hue indicated that their recognition memory parses the hue continuum into red, yellow, green, blue, and purple categories. Infants' categorical distinctions aligned with common distinctions in color lexicons and are organized around hues that are commonly central to lexical categories across languages. The boundaries between infants' categorical distinctions also aligned, relative to the adaptation point, with the cardinal axes that describe the early stages of color representation in retinogeniculate pathways, indicating that infant color categorization may be partly organized by biological mechanisms of color vision. The findings suggest that color categorization in language and thought is partially biologically constrained and have implications for broader debate on how biology, culture, and communication interact in human cognition.

  2. Biological origins of color categorization

    PubMed Central

    Catchpole, Gemma; Abbott, Joshua T.; Bosten, Jenny M.; Franklin, Anna

    2017-01-01

    The biological basis of the commonality in color lexicons across languages has been hotly debated for decades. Prior evidence that infants categorize color could provide support for the hypothesis that color categorization systems are not purely constructed by communication and culture. Here, we investigate the relationship between infants’ categorization of color and the commonality across color lexicons, and the potential biological origin of infant color categories. We systematically mapped infants’ categorical recognition memory for hue onto a stimulus array used previously to document the color lexicons of 110 nonindustrialized languages. Following familiarization to a given hue, infants’ response to a novel hue indicated that their recognition memory parses the hue continuum into red, yellow, green, blue, and purple categories. Infants’ categorical distinctions aligned with common distinctions in color lexicons and are organized around hues that are commonly central to lexical categories across languages. The boundaries between infants’ categorical distinctions also aligned, relative to the adaptation point, with the cardinal axes that describe the early stages of color representation in retinogeniculate pathways, indicating that infant color categorization may be partly organized by biological mechanisms of color vision. The findings suggest that color categorization in language and thought is partially biologically constrained and have implications for broader debate on how biology, culture, and communication interact in human cognition. PMID:28484022

  3. Effects of Fundamental Frequency and Duration Variation on the Perception of South Kyungsang Korean Tones

    ERIC Educational Resources Information Center

    Chang, Seung-Eun

    2013-01-01

    The perception of lexical tones is addressed through research on South Kyungsang Korean, spoken in the southeastern part of Korea. Based on an earlier production study (Chang, 2008a, 2008b), a categorization experiment was conducted to determine the perceptually salient aspects of the perceptual nature of a high tone and a rising tone. The…

  4. Lexical Stress and Phonetic Processing in Word Learning in 20- to 24-Month-Old English-Learning Children

    ERIC Educational Resources Information Center

    Floccia, Caroline; Nazzi, Thierry; Austin, Keith; Arreckx, Frederique; Goslin, Jeremy

    2011-01-01

    To investigate the interaction between segmental and supra-segmental stress-related information in early word learning, two experiments were conducted with 20- to 24-month-old English-learning children. In an adaptation of the object categorization study designed by Nazzi and Gopnik (2001), children were presented with pairs of novel objects whose…

  5. Effects of perceptual and conceptual similarity in lexical priming of young children who stutter: preliminary findings.

    PubMed

    Hartfield, Kia N; Conture, Edward G

    2006-01-01

    The purpose of this study was to investigate the influence of conceptual and perceptual properties of words on the speed and accuracy of lexical retrieval of children who do (CWS) and do not stutter (CWNS) during a picture-naming task. Participants consisted of 13 3-5-year-old CWS and the same number of CWNS. All participants had speech, language, and hearing development within normal limits, with the exception of stuttering for CWS. Both talker groups participated in a picture-naming task where they named, one at a time, computer-presented, black-on-white drawings of common age-appropriate objects. These pictures were named during four auditory priming conditions: (a) a neutral prime consisting of a tone, (b) a word prime physically related to the target word, (c) a word prime functionally related to the target word, and (d) a word prime categorically related to the target word. Speech reaction time (SRT) was measured from the offset of presentation of the picture target to the onset of participant's verbal speech response. Results indicated that CWS were slower than CWNS across priming conditions (i.e., neutral, physical, function, category) and that the speed of lexical retrieval of CWS was more influenced by functional than perceptual aspects of target pictures named. Findings were taken to suggest that CWS tend to organize lexical information functionally more so than physically and that this tendency may relate to difficulties establishing normally fluent speech and language. The reader will learn about and be able to (1) communicate the relevance of examining lexical retrieval in relation to childhood stuttering and (2) describe the method of measuring speech reaction times of accurate and fluent responses during a picture-naming task as a means of assessing lexical retrieval skills.

  6. Additive and interactive effects in semantic priming: Isolating lexical and decision processes in the lexical decision task.

    PubMed

    Yap, Melvin J; Balota, David A; Tan, Sarah E

    2013-01-01

    The present study sheds light on the interplay between lexical and decision processes in the lexical decision task by exploring the effects of lexical decision difficulty on semantic priming effects. In 2 experiments, we increased lexical decision difficulty either by using transposed-letter wordlike nonword distracters (e.g., JUGDE; Experiment 1) or by visually degrading targets (Experiment 2). Although target latencies were considerably slowed by both difficulty manipulations, stimulus quality, but not nonword type, moderated priming effects, consistent with recent work by Lupker and Pexman (2010). To characterize these results in a more fine-grained manner, data were also analyzed at the level of response time (RT) distributions, using a combination of ex-Gaussian, quantile, and diffusion model analyses. The results indicate that for clear targets, priming was reflected by distributional shifting of comparable magnitude across different nonword types. In contrast, priming of degraded targets was reflected by shifting and an increase in the tail of the distribution. We discuss how these findings, along with others, can be accommodated by an embellished multistage activation model that incorporates retrospective prime retrieval and decision-based mechanisms.
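
    The ex-Gaussian component of the distributional analyses mentioned here can be approximated with a maximum-likelihood fit, where distributional shifting shows up mainly in the mu parameter and changes in the slow tail mainly in tau. A minimal sketch, with starting values and data names as assumptions rather than the authors' settings:

    ```python
    # Minimal sketch: maximum-likelihood ex-Gaussian fit to a vector of RTs
    # (mu, sigma for the Gaussian component; tau for the exponential tail).
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import exponnorm

    def fit_ex_gaussian(rt):
        """Return (mu, sigma, tau) maximizing the ex-Gaussian likelihood of rt (in ms)."""
        def neg_loglik(params):
            mu, sigma, tau = params
            if sigma <= 0 or tau <= 0:
                return np.inf
            # scipy's exponnorm uses shape K = tau / sigma for the ex-Gaussian
            return -exponnorm.logpdf(rt, K=tau / sigma, loc=mu, scale=sigma).sum()
        start = [rt.mean() - 0.8 * rt.std(), 0.5 * rt.std(), 0.8 * rt.std()]  # rough start values
        res = minimize(neg_loglik, start, method="Nelder-Mead")
        return res.x

    # Hypothetical usage per condition: priming that only shifts the distribution
    # changes mu; a fatter slow tail changes tau.
    # mu, sigma, tau = fit_ex_gaussian(np.asarray(rts_for_condition))
    ```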

  7. Syllable Transposition Effects in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang H.; Kwon, Youan; Kim, Kyungil; Rastle, Kathleen

    2015-01-01

    Research on the impact of letter transpositions in visual word recognition has yielded important clues about the nature of orthographic representations. This study investigated the impact of syllable transpositions on the recognition of Korean multisyllabic words. Results showed that rejection latencies in visual lexical decision for…

  8. On the costs of parallel processing in dual-task performance: The case of lexical processing in word production.

    PubMed

    Paucke, Madlen; Oppermann, Frank; Koch, Iring; Jescheniak, Jörg D

    2015-12-01

    Previous dual-task picture-naming studies suggest that lexical processes require capacity-limited processes and prevent other tasks from being carried out in parallel. However, studies involving the processing of multiple pictures suggest that parallel lexical processing is possible. The present study investigated the specific costs that may arise when such parallel processing occurs. We used a novel dual-task paradigm by presenting 2 visual objects associated with different tasks and manipulating between-task similarity. With high similarity, a picture-naming task (T1) was combined with a phoneme-decision task (T2), so that lexical processes were shared across tasks. With low similarity, picture-naming was combined with a size-decision T2 (nonshared lexical processes). In Experiment 1, we found that a manipulation of lexical processes (lexical frequency of T1 object name) showed an additive propagation with low between-task similarity and an overadditive propagation with high between-task similarity. Experiment 2 replicated this differential forward propagation of the lexical effect and showed that it disappeared with longer stimulus onset asynchronies. Moreover, both experiments showed backward crosstalk, indexed as worse T1 performance with high between-task similarity compared with low similarity. Together, these findings suggest that conditions of high between-task similarity can lead to parallel lexical processing in both tasks, which, however, does not result in benefits but rather in extra performance costs. These costs can be attributed to crosstalk based on the dual-task binding problem arising from parallel processing. Hence, the present study reveals that capacity-limited lexical processing can run in parallel across dual tasks but only at the expense of extraordinarily high costs. (c) 2015 APA, all rights reserved.

  9. Does viotin activate violin more than viocin? On the use of visual cues during visual-word recognition.

    PubMed

    Perea, Manuel; Panadero, Victoria

    2014-01-01

    The vast majority of neural and computational models of visual-word recognition assume that lexical access is achieved via the activation of abstract letter identities. Thus, a word's overall shape should play no role in this process. In the present lexical decision experiment, we compared word-like pseudowords like viotín (same shape as its base word: violín) vs. viocín (different shape) in mature (college-aged skilled readers), immature (normally reading children), and immature/impaired (young readers with developmental dyslexia) word-recognition systems. Results revealed similar response times (and error rates) to consistent-shape and inconsistent-shape pseudowords for both adult skilled readers and normally reading children; this is consistent with current models of visual-word recognition. In contrast, young readers with developmental dyslexia made significantly more errors to viotín-like pseudowords than to viocín-like pseudowords. Thus, unlike normally reading children, young readers with developmental dyslexia are sensitive to a word's visual cues, presumably because of poor letter representations.

  10. Individual differences in the joint effects of semantic priming and word frequency: The role of lexical integrity

    PubMed Central

    Yap, Melvin J.; Tse, Chi-Shing; Balota, David A.

    2009-01-01

    Word frequency and semantic priming effects are among the most robust effects in visual word recognition, and it has been generally assumed that these two variables produce interactive effects in lexical decision performance, with larger priming effects for low-frequency targets. The results from four lexical decision experiments indicate that the joint effects of semantic priming and word frequency are critically dependent upon differences in the vocabulary knowledge of the participants. Specifically, across two Universities, additive effects of the two variables were observed in participants with more vocabulary knowledge, while interactive effects were observed in participants with less vocabulary knowledge. These results are discussed with reference to Borowsky and Besner’s (1993) multistage account and Plaut and Booth’s (2000) single-mechanism model. In general, the findings are also consistent with a flexible lexical processing system that optimizes performance based on processing fluency and task demands. PMID:20161653

  11. The influence of emotion on lexical processing: insights from RT distributional analysis.

    PubMed

    Yap, Melvin J; Seow, Cui Shan

    2014-04-01

    In two lexical decision experiments, the present study was designed to examine emotional valence effects on visual lexical decision (standard and go/no-go) performance, using traditional analyses of means and distributional analyses of response times. Consistent with an earlier study by Kousta, Vinson, and Vigliocco (Cognition 112:473-481, 2009), we found that emotional words (both negative and positive) were responded to faster than neutral words. Finer-grained distributional analyses further revealed that the facilitation afforded by valence was reflected by a combination of distributional shifting and an increase in the slow tail of the distribution. This suggests that emotional valence effects in lexical decision are unlikely to be entirely mediated by early, preconscious processes, which are associated with pure distributional shifting. Instead, our results suggest a dissociation between early preconscious processes and a later, more task-specific effect that is driven by feedback from semantically rich representations.

  12. Differential processing of consonants and vowels in lexical access through reading.

    PubMed

    New, Boris; Araújo, Verónica; Nazzi, Thierry

    2008-12-01

    Do consonants and vowels have the same importance during reading? Recently, it has been proposed that consonants play a more important role than vowels for language acquisition and adult speech processing. This proposal has started receiving developmental support from studies showing that infants are better at processing specific consonantal than vocalic information while learning new words. This proposal also received support from adult speech processing. In our study, we directly investigated the relative contributions of consonants and vowels to lexical access while reading by using a visual masked-priming lexical decision task. Test items were presented following four different primes: identity (e.g., for the word joli, joli), unrelated (vabu), consonant-related (jalu), and vowel-related (vobi). Priming was found for the identity and consonant-related conditions, but not for the vowel-related condition. These results establish the privileged role of consonants during lexical access while reading.

  13. Sizing up the competition: quantifying the influence of the mental lexicon on auditory and visual spoken word recognition.

    PubMed

    Strand, Julia F; Sommers, Mitchell S

    2011-09-01

    Much research has explored how spoken word recognition is influenced by the architecture and dynamics of the mental lexicon (e.g., Luce and Pisoni, 1998; McClelland and Elman, 1986). A more recent question is whether the processes underlying word recognition are unique to the auditory domain, or whether visually perceived (lipread) speech may also be sensitive to the structure of the mental lexicon (Auer, 2002; Mattys, Bernstein, and Auer, 2002). The current research was designed to test the hypothesis that both aurally and visually perceived spoken words are isolated in the mental lexicon as a function of their modality-specific perceptual similarity to other words. Lexical competition (the extent to which perceptually similar words influence recognition of a stimulus word) was quantified using metrics that are well-established in the literature, as well as a statistical method for calculating perceptual confusability based on the phi-square statistic. Both auditory and visual spoken word recognition were influenced by modality-specific lexical competition as well as stimulus word frequency. These findings extend the scope of activation-competition models of spoken word recognition and reinforce the hypothesis (Auer, 2002; Mattys et al., 2002) that perceptual and cognitive properties underlying spoken word recognition are not specific to the auditory domain. In addition, the results support the use of the phi-square statistic as a better predictor of lexical competition than metrics currently used in models of spoken word recognition. © 2011 Acoustical Society of America
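
    The phi-square idea, quantifying how similarly two stimuli are perceived from their confusion data, can be sketched roughly as follows. The confusion counts, word set, and interpretation below are invented for illustration and are not the materials or implementation from the study above.

      # Toy phi-square measure between two rows of a confusion matrix (counts of
      # perceived responses for each stimulus). Smaller values indicate more
      # similar response distributions, i.e., greater perceptual confusability.
      import numpy as np
      from scipy.stats import chi2_contingency

      def phi_square(row_a, row_b):
          table = np.array([row_a, row_b], dtype=float)
          chi2, _, _, _ = chi2_contingency(table)
          return chi2 / table.sum()

      # Hypothetical confusion counts over the same four response alternatives.
      confusions = {
          "bat": [50, 30, 15, 5],
          "pat": [45, 35, 15, 5],   # responded to much like "bat"
          "cat": [10, 10, 60, 20],
      }

      print(phi_square(confusions["bat"], confusions["pat"]))  # small: highly confusable
      print(phi_square(confusions["bat"], confusions["cat"]))  # larger: clearly distinct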

  14. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  15. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  16. Automatic Activation of Phonological Code during Visual Word Recognition in Children: A Masked Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Perre, Laetitia; Casalis, Séverine

    2017-01-01

    The present study aimed to investigate the development of automatic phonological processes involved in visual word recognition during reading acquisition in French. A visual masked priming lexical decision experiment was carried out with third, fifth graders and adult skilled readers. Three different types of partial overlap between the prime and…

  17. Alternating-script priming in Japanese: Are Katakana and Hiragana characters interchangeable?

    PubMed

    Perea, Manuel; Nakayama, Mariko; Lupker, Stephen J

    2017-07-01

    Models of written word recognition in languages using the Roman alphabet assume that a word's visual form is quickly mapped onto abstract units. This proposal is consistent with the finding that masked priming effects are of similar magnitude from lowercase, uppercase, and alternating-case primes (e.g., beard-BEARD, BEARD-BEARD, and BeArD-BEARD). We examined whether this claim can be readily generalized to the 2 syllabaries of Japanese Kana (Hiragana and Katakana). The specific rationale was that if the visual form of Kana words is lost early in the lexical access process, alternating-script repetition primes should be as effective as same-script repetition primes at activating a target word. Results showed that alternating-script repetition primes were less effective at activating lexical representations of Katakana words than same-script repetition primes; indeed, they were no more effective than partial primes that contained only the Katakana characters from the alternating-script primes. Thus, the idiosyncrasies of each writing system do appear to shape the pathways to lexical access. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Individual Differences in Visual Word Recognition: Insights from the English Lexicon Project

    PubMed Central

    Yap, Melvin J.; Balota, David A.; Sibley, Daragh E.; Ratcliff, Roger

    2011-01-01

    Empirical work and models of visual word recognition have traditionally focused on group-level performance. Despite the emphasis on the prototypical reader, there is clear evidence that variation in reading skill modulates word recognition performance. In the present study, we examined differences between individuals who contributed to the English Lexicon Project (http://elexicon.wustl.edu), an online behavioral database containing nearly four million word recognition (speeded pronunciation and lexical decision) trials from over 1,200 participants. We observed considerable within- and between-session reliability across distinct sets of items, in terms of overall mean response time (RT), RT distributional characteristics, diffusion model parameters (Ratcliff, Gomez, & McKoon, 2004), and sensitivity to underlying lexical dimensions. This indicates reliably detectable individual differences in word recognition performance. In addition, higher vocabulary knowledge was associated with faster, more accurate word recognition performance, attenuated sensitivity to stimulus characteristics, and more efficient accumulation of information. Finally, in contrast to suggestions in the literature, we did not find evidence that individuals were trading off in their utilization of lexical and nonlexical information. PMID:21728459
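
    One piece of this kind of analysis, the between-set reliability of participant-level mean RTs, can be illustrated with a short sketch. The simulated data and the Spearman-Brown step below are an assumed, generic form of the computation, not the English Lexicon Project analysis itself.

      # Reliability sketch: correlate each participant's mean RT on two distinct
      # item sets, then apply the Spearman-Brown correction. Data are simulated.
      import numpy as np

      rng = np.random.default_rng(7)
      n_participants = 200

      trait = rng.normal(700, 80, n_participants)          # stable individual speed
      set_a = trait + rng.normal(0, 40, n_participants)    # mean RT, item set A
      set_b = trait + rng.normal(0, 40, n_participants)    # mean RT, item set B

      r_half = np.corrcoef(set_a, set_b)[0, 1]
      r_full = 2 * r_half / (1 + r_half)                   # Spearman-Brown correction

      print(f"between-set correlation: {r_half:.2f}, corrected reliability: {r_full:.2f}")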

  19. Reading difficulties in Albanian.

    PubMed

    Avdyli, Rrezarta; Cuetos, Fernando

    2012-10-01

    Albanian is an Indo-European language with a shallow orthography, in which there is an absolute correspondence between graphemes and phonemes. We aimed to identify the reading strategies used by Albanian children with reading disabilities during word and pseudoword reading. A pool of 114 Kosovar children with reading disabilities, matched with 150 normal readers aged 6 to 11 years old, was tested. They had to read 120 stimuli that varied in lexicality, frequency, and length. The results, in terms of both reading accuracy and reading times, show that both groups were affected by lexicality and length effects. In both groups, length and lexicality effects were significantly modulated by school year: the length effect was greater in the early grades and diminished later, whereas the lexicality effect showed the opposite pattern. However, the reading difficulties group was less accurate and slower than the control group across all school grades. Analyses of the error patterns showed that phonological errors, in which a letter replacement produces a new nonword, were the most common error type in both groups, although as grade rises, visual errors and lexicalizations increased more in the control group than in the reading difficulties group. These findings suggest that Albanian normal children use both routes (lexical and sublexical) from the beginning of reading despite the complete regularity of Albanian, whereas children with reading difficulties start with sublexical reading and take longer to acquire lexical reading, although both routes eventually become functional.

  20. The cognitive mechanisms underlying perspective taking between conversational partners: Evidence from speakers with Alzheimer’s disease

    PubMed Central

    Wardlow, Liane; Ivanova, Iva; Gollan, Tamar H.

    2014-01-01

    Successful communication requires speakers to consider their listeners’ perspectives. Little is known about how this ability changes in Alzheimer’s disease (AD), although such knowledge could reveal the cognitive mechanisms fundamental to perspective-taking ability, and reveal which cognitive deficits are fundamental to communication disorders in AD. Patients with mild to moderate AD and age- and education-matched controls were tested in a communicative perspective-taking task, and on measures of executive control, general cognitive functioning, and lexical retrieval. Patients’ ability to perform the perspective-taking task was significantly correlated with performance on measures of general cognitive functioning, visual scanning and construction, response conflict and attention. Measures of lexical retrieval tended not to be correlated with performance on the communication task with one exception: semantic but not letter fluency predicted a derived score of perspective-taking ability. These findings broaden our understanding of the cognitive mechanisms underlying perspective taking, and suggest that impairments in perspective taking in AD occur during utterance planning, and at a relatively early processing stage which involves rapid visual scanning and problem solving, rather than during retrieval of lexical items needed to speak. More broadly, these data reveal executive function and semantic deficits, but not problems with lexical retrieval, as more fundamental to the basis of cognitive changes associated with AD. PMID:24467889

  1. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  2. Short-Term and Long-Term Effects on Visual Word Recognition

    ERIC Educational Resources Information Center

    Protopapas, Athanassios; Kapnoula, Efthymia C.

    2016-01-01

    Effects of lexical and sublexical variables on visual word recognition are often treated as homogeneous across participants and stable over time. In this study, we examine the modulation of frequency, length, syllable and bigram frequency, orthographic neighborhood, and graphophonemic consistency effects by (a) individual differences, and (b) item…

  3. Facilitative Orthographic Neighborhood Effects: The SERIOL Model Account

    ERIC Educational Resources Information Center

    Whitney, Carol; Lavidor, Michal

    2005-01-01

    A large orthographic neighborhood (N) facilitates lexical decision for central and left visual field/right hemisphere (LVF/RH) presentation, but not for right visual field/left hemisphere (RVF/LH) presentation. Based on the SERIOL model of letter-position encoding, this asymmetric N effect is explained by differential activation patterns at the…

  4. The functional architecture of the ventral temporal cortex and its role in categorization

    PubMed Central

    Grill-Spector, Kalanit; Weiner, Kevin S.

    2014-01-01

    Visual categorization is thought to occur in the human ventral temporal cortex (VTC), but how this categorization is achieved is still largely unknown. In this Review, we consider the computations and representations that are necessary for categorization and examine how the microanatomical and macroanatomical layout of the VTC might optimize them to achieve rapid and flexible visual categorization. We propose that efficient categorization is achieved by organizing representations in a nested spatial hierarchy in the VTC. This spatial hierarchy serves as a neural infrastructure for the representational hierarchy of visual information in the VTC and thereby enables flexible access to category information at several levels of abstraction. PMID:24962370

  5. The effect of integration masking on visual processing in perceptual categorization.

    PubMed

    Hélie, Sébastien

    2017-08-01

    Learning to recognize and categorize objects is an essential cognitive skill allowing animals to function in the world. However, animals rarely have access to a canonical view of an object in an uncluttered environment. Hence, it is essential to study categorization under noisy, degraded conditions. In this article, we explore how the brain processes categorization stimuli in low signal-to-noise conditions using multivariate pattern analysis. We used an integration masking paradigm with mask opacity of 50%, 60%, and 70% inside a magnetic resonance imaging scanner. The results show that mask opacity affects the blood-oxygen-level dependent (BOLD) signal in visual processing areas (V1, V2, V3, and V4) but does not affect the BOLD signal in brain areas traditionally associated with categorization (prefrontal cortex, striatum, hippocampus). This suggests that when a stimulus is difficult to extract from its background (e.g., low signal-to-noise ratio), the visual system does the work of extracting the stimulus, and activity in areas typically associated with categorization is not affected by the difficulty of the visual conditions. We conclude with implications of this result for research on visual attention, categorization, and the integration of these fields. Copyright © 2017 Elsevier Inc. All rights reserved.

  6. Eye-tracking the time-course of novel word learning and lexical competition in adults and children.

    PubMed

    Weighall, A R; Henderson, L M; Barr, D J; Cairney, S A; Gaskell, M G

    2017-04-01

    Lexical competition is a hallmark of proficient, automatic word recognition. Previous research suggests that there is a delay before a new spoken word becomes engaged in this process, with sleep playing an important role. However, data from one method - the visual world paradigm - consistently show competition without a delay. We trained 42 adults and 40 children (aged 7-8) on novel word-object pairings, and employed this paradigm to measure the time-course of lexical competition. Fixations to novel objects upon hearing existing words (e.g., looks to the novel object biscal upon hearing "click on the biscuit") were compared to fixations on untrained objects. Novel word-object pairings learned immediately before testing and those learned the previous day exhibited significant competition effects, with stronger competition for the previous day pairings for children but not adults. Crucially, this competition effect was significantly smaller for novel than existing competitors (e.g., looks to candy upon hearing "click on the candle"), suggesting that novel items may not compete for recognition like fully-fledged lexical items, even after 24h. Explicit memory (cued recall) was superior for words learned the day before testing, particularly for children; this effect (but not the lexical competition effects) correlated with sleep-spindle density. Together, the results suggest that different aspects of new word learning follow different time courses: visual world competition effects can emerge swiftly, but are qualitatively different from those observed with established words, and are less reliant upon sleep. Furthermore, the findings fit with the view that word learning earlier in development is boosted by sleep to a greater degree. Copyright © 2016. Published by Elsevier Inc.

  7. Lexical-Access Ability and Cognitive Predictors of Speech Recognition in Noise in Adult Cochlear Implant Users

    PubMed Central

    Smits, Cas; Merkus, Paul; Festen, Joost M.; Goverts, S. Theo

    2017-01-01

    Not all of the variance in speech-recognition performance of cochlear implant (CI) users can be explained by biographic and auditory factors. In normal-hearing listeners, linguistic and cognitive factors determine most of speech-in-noise performance. The current study explored specifically the influence of visually measured lexical-access ability compared with other cognitive factors on speech recognition of 24 postlingually deafened CI users. Speech-recognition performance was measured with monosyllables in quiet (consonant-vowel-consonant [CVC]), sentences-in-noise (SIN), and digit-triplets in noise (DIN). In addition to a composite variable of lexical-access ability (LA), measured with a lexical-decision test (LDT) and word-naming task, vocabulary size, working-memory capacity (Reading Span test [RSpan]), and a visual analogue of the SIN test (text reception threshold test) were measured. The DIN test was used to correct for auditory factors in SIN thresholds by taking the difference between SIN and DIN: SRTdiff. Correlation analyses revealed that duration of hearing loss (dHL) was related to SIN thresholds. Better working-memory capacity was related to SIN and SRTdiff scores. LDT reaction time was positively correlated with SRTdiff scores. No significant relationships were found for CVC or DIN scores with the predictor variables. Regression analyses showed that together with dHL, RSpan explained 55% of the variance in SIN thresholds. When controlling for auditory performance, LA, LDT, and RSpan separately explained, together with dHL, respectively 37%, 36%, and 46% of the variance in SRTdiff outcome. The results suggest that poor verbal working-memory capacity and to a lesser extent poor lexical-access ability limit speech-recognition ability in listeners with a CI. PMID:29205095

  8. The Effects of Visual Complexity for Japanese Kanji Processing with High and Low Frequencies

    ERIC Educational Resources Information Center

    Tamaoka, Katsuo; Kiyama, Sachiko

    2013-01-01

    The present study investigated the effects of visual complexity for kanji processing by selecting target kanji from different stroke ranges of visually simple (2-6 strokes), medium (8-12 strokes), and complex (14-20 strokes) kanji with high and low frequencies. A kanji lexical decision task in Experiment 1 and a kanji naming task in Experiment 2…

  9. Re-Examining Format Distortion and Orthographic Neighbourhood Size Effects in the Left, Central and Right Visual Fields

    ERIC Educational Resources Information Center

    Mano, Quintino R.; Patrick, Cory J.; Andresen, Elizabeth N.; Capizzi, Kyle; Biagioli, Raschel; Osmon, David C.

    2010-01-01

    Research has shown orthographic neighbourhood size effects (ONS) in the left visual field (LVF) but not in the right visual field (RVF). An earlier study examined the combined effects of ONS and font distortion in the LVF and RVF, but did not find an interaction. The current lexical decision experiment re-examined the interaction between ONS and…

  10. Individual differences in automatic semantic priming.

    PubMed

    Andrews, Sally; Lo, Steson; Xia, Violet

    2017-05-01

    This research investigated whether masked semantic priming in a semantic categorization task that required classification of words as animals or nonanimals was modulated by individual differences in lexical proficiency. A sample of 89 skilled readers, assessed on reading comprehension, vocabulary and spelling ability, classified target words preceded by brief (50 ms) masked primes that were either congruent or incongruent with the category of the target. Congruent primes were also selected to be either high (e.g., hawk EAGLE, pistol RIFLE) or low (e.g., mole EAGLE, boots RIFLE) in semantic feature overlap with the target. "Overall proficiency," indexed by high performance on both a "semantic composite" measure of reading comprehension and vocabulary and a "spelling composite," was associated with stronger congruence priming from both high and low feature overlap primes for animal exemplars, but only predicted priming from low overlap primes for nonexemplars. Classification of high frequency nonexemplars was also significantly modulated by an independent "spelling-meaning" factor, indexed by the discrepancy between the semantic and spelling composites, because relatively higher scores on the semantic than the spelling composite were associated with stronger semantic priming. These findings show that higher lexical proficiency is associated with stronger evidence of automatic semantic priming and suggest that individual differences in lexical quality modulate the division of labor between orthographic and semantic processing in early lexical retrieval. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Linguistic Skills of Adult Native Speakers, as a Function of Age and Level of Education

    ERIC Educational Resources Information Center

    Mulder, Kimberley; Hulstijn, Jan H.

    2011-01-01

    This study assessed, in a sample of 98 adult native speakers of Dutch, how their lexical skills and their speaking proficiency varied as a function of their age and level of education and profession (EP). Participants, categorized in terms of their age (18-35, 36-50, and 51-76 years old) and the level of their EP (low versus high), were tested on…

  12. Lexical interference effects in sentence processing: Evidence from the visual world paradigm and self-organizing models

    PubMed Central

    Kukona, Anuenue; Cho, Pyeong Whan; Magnuson, James S.; Tabor, Whitney

    2014-01-01

    Psycholinguistic research spanning a number of decades has produced diverging results with regard to the nature of constraint integration in online sentence processing. For example, evidence that language users anticipatorily fixate likely upcoming referents in advance of evidence in the speech signal supports rapid context integration. By contrast, evidence that language users activate representations that conflict with contextual constraints, or only indirectly satisfy them, supports non-integration or late integration. Here, we report on a self-organizing neural network framework that addresses one aspect of constraint integration: the integration of incoming lexical information (i.e., an incoming word) with sentence context information (i.e., from preceding words in an unfolding utterance). In two simulations, we show that the framework predicts both classic results concerned with lexical ambiguity resolution (Swinney, 1979; Tanenhaus, Leiman, & Seidenberg, 1979), which suggest late context integration, and results demonstrating anticipatory eye movements (e.g., Altmann & Kamide, 1999), which support rapid context integration. We also report two experiments using the visual world paradigm that confirm a new prediction of the framework. Listeners heard sentences like “The boy will eat the white…,” while viewing visual displays with objects like a white cake (i.e., a predictable direct object of “eat”), white car (i.e., an object not predicted by “eat,” but consistent with “white”), and distractors. Consistent with our simulation predictions, we found that while listeners fixated white cake most, they also fixated white car more than unrelated distractors in this highly constraining sentence (and visual) context. PMID:24245535

  13. Does the Visual Attention Span Play a Role in Reading in Arabic?

    ERIC Educational Resources Information Center

    Lallier, Marie; Abu Mallouh, Reem; Mohammed, Ahmed M.; Khalifa, Batoul; Perea, Manuel; Carreiras, Manuel

    2018-01-01

    It is unclear whether the association between the visual attention (VA) span and reading differs across languages. Here we studied this relationship in Arabic, where the use of specific reading strategies depends on the amount of diacritics on words: reading vowelized and nonvowelized Arabic scripts favor sublexical and lexical strategies,…

  14. Repetition Priming within and between the Two Cerebral Hemispheres

    ERIC Educational Resources Information Center

    Weems, S.A.; Zaidel, E.

    2005-01-01

    Two experiments explored repetition priming benefits in the left and right cerebral hemispheres. In both experiments, a lateralized lexical decision task was employed using repeated target stimuli. In the first experiment, all targets were repeated in the same visual field, and in the second experiment the visual field of presentation was switched…

  15. Neighborhood Effects on Nonword Visual Processing in a Language with Shallow Orthography

    ERIC Educational Resources Information Center

    Arduino, Lisa S.; Burani, Cristina

    2004-01-01

    Neighborhood size and neighborhood frequency were orthogonally varied in two experiments on Italian nonwords. In Experiment 1, an inhibitory effect of neighborhood frequency on visual lexical decision was found: The presence of one high-frequency neighbor increased response latencies and error rates to nonwords. By contrast, no effect of…

  16. Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification

    ERIC Educational Resources Information Center

    Burt, Jennifer S.

    2009-01-01

    University students participated in five experiments concerning the effects of unmasked, orthographically similar, primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…

  17. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  18. Evidence for Early Morphological Decomposition in Visual Word Recognition

    ERIC Educational Resources Information Center

    Solomyak, Olla; Marantz, Alec

    2010-01-01

    We employ a single-trial correlational MEG analysis technique to investigate early processing in the visual recognition of morphologically complex words. Three classes of affixed words were presented in a lexical decision task: free stems (e.g., taxable), bound roots (e.g., tolerable), and unique root words (e.g., vulnerable, the root of which…

  19. When canary primes yellow: effects of semantic memory on overt attention.

    PubMed

    Léger, Laure; Chauvet, Elodie

    2015-02-01

    This study explored how overt attention is influenced by the colour that is primed when a target word is read during a lexical visual search task. Prior studies have shown that attention can be influenced by conceptual or perceptual overlap between a target word and distractor pictures: attention is attracted to pictures that have the same form (rope--snake) or colour (green--frog) as the spoken target word or is drawn to an object from the same category as the spoken target word (trumpet--piano). The hypothesis for this study was that attention should be attracted to words displayed in the colour that is primed by reading a target word (for example, yellow for canary). An experiment was conducted in which participants' eye movements were recorded whilst they completed a lexical visual search task. The primary finding was that participants' eye movements were mainly directed towards words displayed in the colour primed by reading the target word, even though this colour was not relevant to completing the visual search task. This result is discussed in terms of top-down guidance of overt attention in visual search for words.

  20. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  1. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss.

    PubMed

    Miller, Christi W; Stewart, Erin K; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A; Tremblay, Kelly

    2017-08-16

    This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was conducted under unaided conditions. A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed.
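
    As a rough illustration of the kind of linear mixed model reported, the sketch below predicts a speech-recognition score from visual cues, noise type, pure-tone average, and a working-memory measure, with a random intercept per listener. All column names, coefficients, and data are hypothetical; this is not the authors' analysis script.

      # Linear mixed model sketch with statsmodels; simulated data only.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      n_subj, n_cond = 76, 4
      df = pd.DataFrame({
          "subject": np.repeat(np.arange(n_subj), n_cond),
          "visual_cues": np.tile([0, 0, 1, 1], n_subj),
          "babble_noise": np.tile([0, 1, 0, 1], n_subj),
          "pta": np.repeat(rng.normal(45, 10, n_subj), n_cond),      # pure-tone average
          "wm_span": np.repeat(rng.normal(0, 1, n_subj), n_cond),    # working memory
      })
      df["score"] = (60 + 15 * df["visual_cues"] - 0.5 * df["pta"]
                     + np.repeat(rng.normal(0, 5, n_subj), n_cond)   # subject intercepts
                     + rng.normal(0, 5, len(df)))

      model = smf.mixedlm("score ~ visual_cues + babble_noise + pta + wm_span",
                          data=df, groups=df["subject"])
      print(model.fit().summary())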

  2. Semantic priming from crowded words.

    PubMed

    Yeh, Su-Ling; He, Sheng; Cavanagh, Patrick

    2012-06-01

    Vision in a cluttered scene is extremely inefficient. This damaging effect of clutter, known as crowding, affects many aspects of visual processing (e.g., reading speed). We examined observers' processing of crowded targets in a lexical decision task, using single-character Chinese words that are compact but carry semantic meaning. Despite being unrecognizable and indistinguishable from matched nonwords, crowded prime words still generated robust semantic-priming effects on lexical decisions for test words presented in isolation. Indeed, the semantic-priming effect of crowded primes was similar to that of uncrowded primes. These findings show that the meanings of words survive crowding even when the identities of the words do not, suggesting that crowding does not prevent semantic activation, a process that may have evolved in the context of a cluttered visual environment.

  3. Combining Semantic and Lexical Methods for Mapping MedDRA to VCM Icons.

    PubMed

    Lamy, Jean-Baptiste; Tsopra, Rosy

    2018-01-01

    VCM (Visualization of Concept in Medicine) is an iconic language that represents medical concepts, such as disorders, by icons. VCM has a formal semantics described by an ontology. The icons can be used in medical software for providing a visual summary or enriching texts. However, the use of VCM icons in user interfaces requires mapping standard medical terminologies to VCM. Here, we present a method combining semantic and lexical approaches for mapping MedDRA to VCM. The method takes advantage of the hierarchical relations in MedDRA. It also analyzes the groups of lemmas in the terms' labels, and relies on a manual mapping of these groups to the concepts in the VCM ontology. We evaluate the method on 50 terms. Finally, we discuss the method and suggest perspectives.
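
    The lexical half of such a mapping, matching lemma groups found in a term's label against manually curated groups that point to icon concepts, might look roughly like the sketch below. The lemma groups, concept names, and the trivial normalization step are invented placeholders, not the actual MedDRA or VCM resources.

      # Simplified lemma-group lookup: invented groups and concepts for illustration.
      lemma_groups = {
          frozenset({"cardiac", "heart"}): "anatomy:heart",
          frozenset({"failure", "insufficiency"}): "disorder:functional_failure",
          frozenset({"infection", "infectious"}): "etiology:infection",
      }

      def lemmatize(label):
          # Placeholder normalization; a real system would use a proper lemmatizer.
          return {w.lower().rstrip("s") for w in label.split()}

      def map_to_icon_concepts(label):
          lemmas = lemmatize(label)
          return [concept for group, concept in lemma_groups.items() if group & lemmas]

      print(map_to_icon_concepts("Cardiac failure"))
      # ['anatomy:heart', 'disorder:functional_failure']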

  4. Beyond the initial 140 ms, lexical decision and reading aloud are different tasks: An ERP study with topographic analysis.

    PubMed

    Mahé, Gwendoline; Zesiger, Pascal; Laganaro, Marina

    2015-11-15

    Most of our knowledge on the time-course of the mechanisms involved in reading derived from electrophysiological studies is based on lexical decision tasks. By contrast, very few ERP studies investigated the processes involved in reading aloud. It has been suggested that the lexical decision task provides a good index of the processes occurring during reading aloud, with only late processing differences related to task response modalities. However, some behavioral studies reported different sensitivity to psycholinguistic factors between the two tasks, suggesting that print processing could differ at earlier processing stages. The aim of the present study was thus to carry out an ERP comparison between lexical decision and reading aloud in order to determine when print processing differs between these two tasks. Twenty native French speakers performed a lexical decision task and a reading aloud task with the same written stimuli. Results revealed different electrophysiological patterns on both waveform amplitudes and global topography between lexical decision and reading aloud from about 140 ms after stimulus presentation for both words and pseudowords, i.e., as early as the N170 component. These results suggest that only very early, low-level visual processes are common to the two tasks which differ in core processes. Taken together, our main finding questions the use of the lexical decision task as an appropriate paradigm to investigate reading processes and warns against generalizing its results to word reading. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Priming and the guidance by visual and categorical templates in visual search.

    PubMed

    Wilschut, Anna; Theeuwes, Jan; Olivers, Christian N L

    2014-01-01

    Visual search is thought to be guided by top-down templates that are held in visual working memory. Previous studies have shown that a search-guiding template can be rapidly and strongly implemented from a visual cue, whereas templates are less effective when based on categorical cues. Direct visual priming from cue to target may underlie this difference. In two experiments we first asked observers to remember two possible target colors. A postcue then indicated which of the two would be the relevant color. The task was to locate a briefly presented and masked target of the cued color among irrelevant distractor items. Experiment 1 showed that overall search accuracy improved more rapidly on the basis of a direct visual postcue that carried the target color, compared to a neutral postcue that pointed to the memorized color. However, selectivity toward the target feature, i.e., the extent to which observers searched selectively among items of the cued vs. uncued color, was found to be relatively unaffected by the presence of the visual signal. In Experiment 2 we compared search that was based on either visual or categorical information, but now controlled for direct visual priming. This resulted in no differences in either overall performance or selectivity. Altogether, the results suggest that perceptual processing of visual search targets is facilitated by priming from visual cues, whereas attentional selectivity is enhanced by a working memory template that can be formed from both visual and categorical input. Furthermore, if the priming is controlled for, categorical- and visual-based templates similarly enhance search guidance.

  6. On the Dissociation of Word/Nonword Repetition Effects in Lexical Decision: An Evidence Accumulation Account.

    PubMed

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta; Gomez, Pablo

    2016-01-01

    A number of models of visual-word recognition assume that the repetition of an item in a lexical decision experiment increases that item's familiarity/wordness. This would produce not only a facilitative repetition effect for words, but also an inhibitory effect for nonwords (i.e., more familiarity/wordness makes the negative decision slower). We conducted a two-block lexical decision experiment to examine word/nonword repetition effects in the framework of a leading "familiarity/wordness" model of the lexical decision task, namely, the diffusion model (Ratcliff et al., 2004). Results showed that while repeated words were responded to faster than the unrepeated words, repeated nonwords were responded to more slowly than the nonrepeated nonwords. Fits from the diffusion model revealed that the repetition effect for words/nonwords was mainly due to differences in the familiarity/wordness (drift rate) parameter. This word/nonword dissociation favors those accounts that posit that the previous presentation of an item increases its degree of familiarity/wordness.

  7. On the Dissociation of Word/Nonword Repetition Effects in Lexical Decision: An Evidence Accumulation Account

    PubMed Central

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta; Gomez, Pablo

    2016-01-01

    A number of models of visual-word recognition assume that the repetition of an item in a lexical decision experiment increases that item's familiarity/wordness. This would produce not only a facilitative repetition effect for words, but also an inhibitory effect for nonwords (i.e., more familiarity/wordness makes the negative decision slower). We conducted a two-block lexical decision experiment to examine word/nonword repetition effects in the framework of a leading “familiarity/wordness” model of the lexical decision task, namely, the diffusion model (Ratcliff et al., 2004). Results showed that while repeated words were responded to faster than the unrepeated words, repeated nonwords were responded to more slowly than the nonrepeated nonwords. Fits from the diffusion model revealed that the repetition effect for words/nonwords was mainly due to differences in the familiarity/wordness (drift rate) parameter. This word/nonword dissociation favors those accounts that posit that the previous presentation of an item increases its degree of familiarity/wordness. PMID:26925021
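
    The drift-rate account described above can be illustrated with a toy random-walk simulation: treating repetition as an increase in familiarity/wordness shifts the drift toward the "word" boundary, which speeds word decisions and slows nonword rejections. All parameter values below are invented, and the simulation is far simpler than a fitted diffusion model.

      # Toy two-boundary diffusion simulation; repetition modeled as a change in drift.
      import numpy as np

      rng = np.random.default_rng(11)

      def mean_rt(drift, boundary=1.0, dt=0.002, t0=0.3, n=500):
          # Average decision time for a noisy accumulator between -boundary and +boundary.
          rts = []
          for _ in range(n):
              x, t = 0.0, 0.0
              while abs(x) < boundary:
                  x += drift * dt + np.sqrt(dt) * rng.standard_normal()
                  t += dt
              rts.append(t0 + t)
          return float(np.mean(rts))

      # Positive drift pulls toward the "word" boundary; repetition raises wordness.
      print("word, unrepeated   :", mean_rt(drift=+1.5))
      print("word, repeated     :", mean_rt(drift=+2.0))   # faster "word" responses
      print("nonword, unrepeated:", mean_rt(drift=-1.5))
      print("nonword, repeated  :", mean_rt(drift=-1.0))   # rejections slow down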

  8. A challenging dissociation in masked identity priming with the lexical decision task.

    PubMed

    Perea, Manuel; Jiménez, María; Gómez, Pablo

    2014-05-01

    The masked priming technique has been used extensively to explore the early stages of visual-word recognition. One key phenomenon in masked priming lexical decision is that identity priming is robust for words, whereas it is small/unreliable for nonwords. This dissociation has usually been explained on the basis that masked priming effects are lexical in nature, and hence there should not be an identity prime facilitation for nonwords. We present two experiments whose results are at odds with the assumption made by models that postulate that identity priming is purely lexical, and also challenge the assumption that word and nonword responses are based on the same information. Our experiments revealed that for nonwords, but not for words, matched-case identity PRIME-TARGET pairs were responded to faster than mismatched-case identity prime-TARGET pairs, and this phenomenon was not modulated by the lowercase/uppercase feature similarity of the stimuli. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. Manipulating Color and Other Visual Information Influences Picture Naming at Different Levels of Processing: Evidence from Alzheimer Subjects and Normal Controls

    ERIC Educational Resources Information Center

    Zannino, Gian Daniele; Perri, Roberta; Salamone, Giovanna; Di Lorenzo, Concetta; Caltagirone, Carlo; Carlesimo, Giovanni A.

    2010-01-01

    There is now a large body of evidence suggesting that color and photographic detail exert an effect on recognition of visually presented familiar objects. However, an unresolved issue is whether these factors act at the visual, the semantic or lexical level of the recognition process. In the present study, we investigated this issue by having…

  10. Asymmetries in Infants’ Attention Toward and Categorization of Male Faces: The Potential Role of Experience

    PubMed Central

    Rennels, Jennifer L.; Kayl, Andrea J.; Langlois, Judith H.; Davis, Rachel E.; Orlewicz, Mateusz

    2015-01-01

    Infants typically have a preponderance of experience with females, resulting in visual preferences for female faces, particularly high attractive females, and in better categorization of female relative to male faces. We examined whether these abilities generalized to infants’ visual preferences for and categorization of perceptually similar male faces (i.e., low masculine males). Twelve-month-olds visually preferred high attractive relative to low attractive male faces within low masculine pairs only (Exp. 1), but did not visually prefer low masculine relative to high masculine male faces (Exp. 2). Lack of visual preferences was not due to infants’ inability to discriminate between the male faces (Exps. 3 & 4). Twelve-month-olds categorized low masculine, but not high masculine, male faces (Exp. 5). Infants could individuate male faces within each of the categories (Exp. 6). Twelve-month-olds’ attention toward and categorization of male faces may reflect a generalization of their female facial expertise. PMID:26547249

  11. Acquisition of linguistic procedures for printed words: neuropsychological implications for learning.

    PubMed

    Berninger, V W

    1988-10-01

    A microcomputerized experiment, administered to 45 children in the 2nd, 5th, and 8th month of first grade, manipulated three variables: (a) stimulus unit (whole word or letter-by-letter presentation), (b) nature of stimulus information (phonically regular words, phonically irregular words, nonsense words, and letter strings, which differ in whether phonemic, orthographic, semantic, and/or name codes are available), and (c) linguistic task (lexical decision, naming, and written reproduction). Letter-by-letter presentation resulted in more accurate lexical decision and naming but not more accurate written reproduction. Interactions between nature of stimulus information and linguistic task occurred. Throughout the year, accuracy was greater for lexical decision than for naming or written reproduction. The superiority of lexical decision cannot be attributed to the higher probability of correct responses on a binary choice task because only consistently correct responses on repeated trials were analyzed. The earlier development of lexical decision, a receptive task, than of naming or written reproduction, production tasks, suggests that hidden units (Hinton & Sejnowski, 1986) in tertiary cortical areas may abstract visual-linguistic associations in printed words before production units in primary cortical areas can produce printed words orally or graphically.

  12. The role of lexical variables in the visual recognition of Chinese characters: A megastudy analysis.

    PubMed

    Sze, Wei Ping; Yap, Melvin J; Rickard Liow, Susan J

    2015-01-01

    Logographic Chinese orthography partially represents both phonology and semantics. By capturing the online processing of a large pool of Chinese characters, we were able to examine the relative salience of specific lexical variables when this nonalphabetic script is read. Using a sample of native mainland Chinese speakers (N = 35), lexical decision latencies for 1560 single characters were collated into a database, before the effects of a comprehensive range of variables were explored. Hierarchical regression analyses determined the unique item-level variance explained by orthographic (frequency, stroke count), semantic (age of learning, imageability, number of meanings), and phonological (consistency, phonological frequency) factors. Orthographic and semantic variables, respectively, accounted for more collective variance than the phonological variables. Significant main effects were further observed for the individual orthographic and semantic predictors. These results are consistent with the idea that skilled readers tend to rely on orthographic and semantic information when processing visually presented characters. This megastudy approach marks an important extension to existing work on Chinese character recognition, which hitherto has relied on factorial designs. Collectively, the findings reported here represent a useful set of empirical constraints for future computational models of character recognition.
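
    The blockwise logic of such a hierarchical regression, comparing R-squared as orthographic, then semantic, then phonological predictors are added, can be sketched as follows. The variable names, simulated values, and block composition are illustrative assumptions, not the published character database or analysis.

      # Hierarchical (blockwise) regression sketch on simulated item-level data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(5)
      n = 1560
      df = pd.DataFrame({
          "log_freq": rng.normal(2.5, 1.0, n),          # orthographic block
          "strokes": rng.integers(2, 25, n),
          "aol": rng.normal(6, 2, n),                   # semantic block (age of learning)
          "imageability": rng.normal(4, 1, n),
          "consistency": rng.normal(0.5, 0.2, n),       # phonological block
      })
      df["rt"] = (800 - 40 * df["log_freq"] + 4 * df["strokes"] + 10 * df["aol"]
                  - 8 * df["imageability"] - 30 * df["consistency"]
                  + rng.normal(0, 60, n))

      blocks = [
          "rt ~ log_freq + strokes",
          "rt ~ log_freq + strokes + aol + imageability",
          "rt ~ log_freq + strokes + aol + imageability + consistency",
      ]

      prev_r2 = 0.0
      for formula in blocks:
          r2 = smf.ols(formula, data=df).fit().rsquared
          print(f"{formula:60s} R^2 = {r2:.3f} (+{r2 - prev_r2:.3f})")
          prev_r2 = r2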

  13. Identifying missing dictionary entries with frequency-conserving context models.

    PubMed

    Williams, Jake Ryland; Clark, Eric M; Bagrow, James P; Danforth, Christopher M; Dodds, Peter Sheridan

    2015-10-01

    In an effort to better understand meaning from natural language texts, we explore methods aimed at organizing lexical objects into contexts. A number of these methods for organization fall into a family defined by word ordering. Unlike demographic or spatial partitions of data, these collocation models are of special importance for their universal applicability. While we are interested here in text and have framed our treatment appropriately, our work is potentially applicable to other areas of research (e.g., speech, genomics, and mobility patterns) where one has ordered categorical data (e.g., sounds, genes, and locations). Our approach focuses on the phrase (whether word or larger) as the primary meaning-bearing lexical unit and object of study. To do so, we employ our previously developed framework for generating word-conserving phrase-frequency data. Upon training our model with the Wiktionary, an extensive, online, collaborative, and open-source dictionary that contains over 100,000 phrasal definitions, we develop highly effective filters for the identification of meaningful, missing phrase entries. With our predictions we then engage the editorial community of the Wiktionary and propose short lists of potential missing entries for definition, developing a breakthrough lexical extraction technique and expanding our knowledge of the defined English lexicon of phrases.
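
    The basic move, counting phrase frequencies in running text and flagging frequent phrases that lack a dictionary entry, can be sketched in a few lines. The toy corpus, tiny dictionary, and frequency threshold below are placeholders; the published method uses frequency-conserving phrase partitions and far more sophisticated filters.

      # Toy candidate-detection sketch: frequent n-gram phrases without an entry.
      from collections import Counter

      corpus = ("the hidden markov model improved while the hidden markov model "
                "was retrained on the training data using machine learning").split()

      dictionary = {"machine learning", "training data"}   # known phrase entries

      def ngrams(tokens, n):
          return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

      phrase_counts = Counter(ngrams(corpus, 2) + ngrams(corpus, 3))

      candidates = [(p, c) for p, c in phrase_counts.most_common()
                    if c >= 2 and p not in dictionary]
      print(candidates)   # surfaces "hidden markov model" (plus noisier fragments)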

  14. Speaking rate affects the perception of duration as a suprasegmental lexical-stress cue.

    PubMed

    Reinisch, Eva; Jesse, Alexandra; McQueen, James M

    2011-06-01

    Three categorization experiments investigated whether the speaking rate of a preceding sentence influences durational cues to the perception of suprasegmental lexical-stress patterns. Dutch two-syllable word fragments had to be judged as coming from one of two longer words that matched the fragment segmentally but differed in lexical stress placement. Word pairs contrasted primary stress on either the first versus the second syllable or the first versus the third syllable. Duration of the initial or the second syllable of the fragments and rate of the preceding context (fast vs. slow) were manipulated. Listeners used speaking rate to decide about the degree of stress on initial syllables whether the syllables' absolute durations were informative about stress (Experiment 1a) or not (Experiment 1b). Rate effects on the second syllable were visible only when the initial syllable was ambiguous in duration with respect to the preceding rate context (Experiment 2). Absolute second syllable durations contributed little to stress perception (Experiment 3). These results suggest that speaking rate is used to disambiguate words and that rate-modulated stress cues are more important on initial than noninitial syllables. Speaking rate affects perception of suprasegmental information.

  15. Bilingual processing of ASL-English code-blends: The consequences of accessing two lexical representations simultaneously

    PubMed Central

    Emmorey, Karen; Petrich, Jennifer; Gollan, Tamar H.

    2012-01-01

    Bilinguals who are fluent in American Sign Language (ASL) and English often produce code-blends, simultaneously articulating a sign and a word while conversing with other ASL-English bilinguals. To investigate the cognitive mechanisms underlying code-blend processing, we compared picture-naming times (Experiment 1) and semantic categorization times (Experiment 2) for code-blends versus ASL signs and English words produced alone. In production, code-blending did not slow lexical retrieval for ASL and actually facilitated access to low-frequency signs. However, code-blending delayed speech production because bimodal bilinguals synchronized English and ASL lexical onsets. In comprehension, code-blending speeded access to both languages. Bimodal bilinguals’ ability to produce code-blends without any cost to ASL implies that the language system either has (or can develop) a mechanism for switching off competition to allow simultaneous production of close competitors. Code-blend facilitation effects during comprehension likely reflect cross-linguistic (and cross-modal) integration at the phonological and/or semantic levels. The absence of any consistent processing costs for code-blending illustrates a surprising limitation on dual-task costs and may explain why bimodal bilinguals code-blend more often than they code-switch. PMID:22773886

  16. Language and Short-Term Memory: The Role of Perceptual-Motor Affordance

    PubMed Central

    2014-01-01

    The advantage for real words over nonwords in serial recall—the lexicality effect—is typically attributed to support for item-level phonology, either via redintegration, whereby partially degraded short-term traces are “cleaned up” via support from long-term representations of the phonological material or via the more robust temporary activation of long-term lexical phonological knowledge that derives from its combination with established lexical and semantic levels of representation. The much smaller effect of lexicality in serial recognition, where the items are re-presented in the recognition cue, is attributed either to the minimal role for redintegration from long-term memory or to the minimal role for item memory itself in such retrieval conditions. We show that the reduced lexicality effect in serial recognition is not a function of the retrieval conditions, but rather because previous demonstrations have used auditory presentation, and we demonstrate a robust lexicality effect for visual serial recognition in a setting where auditory presentation produces no such effect. Furthermore, this effect is abolished under conditions of articulatory suppression. We argue that linguistic knowledge affects the readiness with which verbal material is segmentally recoded via speech motor processes that support rehearsal and therefore affects tasks that involve recoding. On the other hand, auditory perceptual organization affords sequence matching in the absence of such a requirement for segmental recoding and therefore does not show such effects of linguistic knowledge. PMID:24797440

  17. Multiple priming of lexically ambiguous and unambiguous targets in the cerebral hemispheres: the coarse coding hypothesis revisited

    PubMed Central

    Kandhadai, Padmapriya; Federmeier, Kara D.

    2009-01-01

    The coarse coding hypothesis (Jung-Beeman 2005) postulates that the cerebral hemispheres differ in their breadth of semantic activation, with the left hemisphere (LH) activating a narrow, focused semantic field and the right (RH) weakly activating a broader semantic field. In support of coarse coding, studies (e.g., Faust and Lavidor 2003) investigating priming for multiple senses of a lexically ambiguous word have reported a RH benefit. However, studies of mediated priming (Livesay and Burgess 2003; Richards and Chiarello 1995) have failed to find a RH advantage for processing distantly-linked, unambiguous words. To address this debate, the present study made use of a multiple priming paradigm (Balota and Paul, 1996) in which two primes either converged onto the single meaning of an unambiguous, lexically-associated target (LION-STRIPES-TIGER) or diverged onto different meanings of an ambiguous target (KIDNEY-PIANO-ORGAN). In two experiments, participants either made lexical decisions to targets (Experiment 1) or made a semantic relatedness judgment between primes and targets (Experiment 2). In both tasks, for both ambiguous and unambiguous triplets we found equivalent priming strengths and patterns across the two visual fields, counter to the predictions of the coarse coding hypothesis. Priming patterns further suggested that both hemispheres made use of lexical level representations in the lexical decision task and semantic representations in the semantic judgment task. PMID:17459344

  18. Language and short-term memory: the role of perceptual-motor affordance.

    PubMed

    Macken, Bill; Taylor, John C; Jones, Dylan M

    2014-09-01

    The advantage for real words over nonwords in serial recall--the lexicality effect--is typically attributed to support for item-level phonology, either via redintegration, whereby partially degraded short-term traces are "cleaned up" via support from long-term representations of the phonological material or via the more robust temporary activation of long-term lexical phonological knowledge that derives from its combination with established lexical and semantic levels of representation. The much smaller effect of lexicality in serial recognition, where the items are re-presented in the recognition cue, is attributed either to the minimal role for redintegration from long-term memory or to the minimal role for item memory itself in such retrieval conditions. We show that the reduced lexicality effect in serial recognition is not a function of the retrieval conditions, but rather because previous demonstrations have used auditory presentation, and we demonstrate a robust lexicality effect for visual serial recognition in a setting where auditory presentation produces no such effect. Furthermore, this effect is abolished under conditions of articulatory suppression. We argue that linguistic knowledge affects the readiness with which verbal material is segmentally recoded via speech motor processes that support rehearsal and therefore affects tasks that involve recoding. On the other hand, auditory perceptual organization affords sequence matching in the absence of such a requirement for segmental recoding and therefore does not show such effects of linguistic knowledge.

  19. The Impact of Orthographic Connectivity on Visual Word Recognition in Arabic: A Cross-Sectional Study

    ERIC Educational Resources Information Center

    Khateb, Asaid; Khateb-Abdelgani, Manal; Taha, Haitham Y.; Ibrahim, Raphiq

    2014-01-01

    This study aimed at assessing the effects of letters' connectivity in Arabic on visual word recognition. For this purpose, reaction times (RTs) and accuracy scores were collected from ninety-third, sixth and ninth grade native Arabic speakers during a lexical decision task, using fully connected (Cw), partially connected (PCw) and…

  20. Neural Correlates of Morphological Decomposition in a Morphologically Rich Language: An fMRI Study

    ERIC Educational Resources Information Center

    Lehtonen, Minna; Vorobyev, Victor A.; Hugdahl, Kenneth; Tuokkola, Terhi; Laine, Matti

    2006-01-01

    By employing visual lexical decision and functional MRI, we studied the neural correlates of morphological decomposition in a highly inflected language (Finnish) where most inflected noun forms elicit a consistent processing cost during word recognition. This behavioral effect could reflect suffix stripping at the visual word form level and/or…

  1. The Influence of Semantic Neighbours on Visual Word Recognition

    ERIC Educational Resources Information Center

    Yates, Mark

    2012-01-01

    Although it is assumed that semantics is a critical component of visual word recognition, there is still much that we do not understand. One recent way of studying semantic processing has been in terms of semantic neighbourhood (SN) density, and this research has shown that semantic neighbours facilitate lexical decisions. However, it is not clear…

  2. Efficiency of Lexical Access in Children with Autism Spectrum Disorders: Does Modality Matter?

    ERIC Educational Resources Information Center

    Harper-Hill, Keely; Copland, David; Arnott, Wendy

    2014-01-01

    The provision of visual support to individuals with an autism spectrum disorder (ASD) is widely recommended. We explored one mechanism underlying the use of visual supports: efficiency of language processing. Two groups of children, one with and one without an ASD, participated. The groups had comparable oral and written language skills and…

  3. The Role of Derivative Suffix Productivity in the Visual Word Recognition of Complex Words

    ERIC Educational Resources Information Center

    Lázaro, Miguel; Sainz, Javier; Illera, Víctor

    2015-01-01

    In this article we present two lexical decision experiments that examine the role of base frequency and of derivative suffix productivity in visual recognition of Spanish words. In the first experiment we find that complex words with productive derivative suffixes result in lower response times than those with unproductive derivative suffixes.…

  4. Morphological Processing during Visual Word Recognition in Hebrew as a First and a Second Language

    ERIC Educational Resources Information Center

    Norman, Tal; Degani, Tamar; Peleg, Orna

    2017-01-01

    The present study examined whether sublexical morphological processing takes place during visual word-recognition in Hebrew, and whether morphological decomposition of written words depends on lexical activation of the complete word. Furthermore, it examined whether morphological processing is similar when reading Hebrew as a first language (L1)…

  5. Space-by-time manifold representation of dynamic facial expressions for emotion categorization

    PubMed Central

    Delis, Ioannis; Chen, Chaona; Jack, Rachael E.; Garrod, Oliver G. B.; Panzeri, Stefano; Schyns, Philippe G.

    2016-01-01

    Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism—termed space-by-time manifold decomposition—that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected “other.” Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions. PMID:27305521
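
    The space-by-time formalism above factorizes dynamic facial-movement data into spatial (Action Unit) and temporal components whose weighted combinations drive categorization. As a minimal sketch of that general idea, and not the authors' algorithm, the snippet below uses a truncated SVD to split one hypothetical Action-Unit-by-time matrix into separable spatial and temporal factors; the array sizes and variable names are illustrative assumptions.

```python
import numpy as np

# Hypothetical data: activation of 20 facial Action Units over 50 time points
# for one stimulus (rows = Action Units, columns = time samples).
rng = np.random.default_rng(0)
X = rng.random((20, 50))

# Rank-k separable "space-by-time" approximation via truncated SVD:
# X ~= sum_i  s_i * spatial_i (20,)  outer  temporal_i (50,).
k = 3
U, s, Vt = np.linalg.svd(X, full_matrices=False)
spatial_modules = U[:, :k]       # columns: spatial (Action Unit) components
temporal_modules = Vt[:k, :]     # rows: temporal components
coefficients = np.diag(s[:k])    # how strongly each spatial-temporal pair is expressed

X_hat = spatial_modules @ coefficients @ temporal_modules
print("relative reconstruction error:", np.linalg.norm(X - X_hat) / np.linalg.norm(X))
```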

  6. Scene and human face recognition in the central vision of patients with glaucoma

    PubMed Central

    Aptel, Florent; Attye, Arnaud; Guyader, Nathalie; Boucart, Muriel; Chiquet, Christophe; Peyrin, Carole

    2018-01-01

    Primary open-angle glaucoma (POAG) initially affects mainly peripheral vision. Current behavioral studies support the idea that visual defects of patients with POAG extend into parts of the central visual field classified as normal by static automated perimetry analysis. This is particularly true for visual tasks involving processes of a higher level than mere detection. The purpose of this study was to assess visual abilities of POAG patients in central vision. Patients were assigned to two groups following a visual field examination (Humphrey 24–2 SITA-Standard test). Patients with both peripheral and central defects and patients with peripheral but no central defect, as well as age-matched controls, participated in the experiment. All participants had to perform two visual tasks where low-contrast stimuli were presented in the central 6° of the visual field. A categorization task of scene images and human face images assessed high-level visual recognition abilities. In contrast, a detection task using the same stimuli assessed low-level visual function. The difference in performance between detection and categorization revealed the cost of high-level visual processing. Compared to controls, patients with a central visual defect showed a deficit in both detection and categorization of all low-contrast images. This is consistent with the abnormal retinal sensitivity as assessed by perimetry. However, the deficit was greater for categorization than detection. Patients without a central defect showed similar performances to the controls concerning the detection and categorization of faces. However, while the detection of scene images was well-maintained, these patients showed a deficit in their categorization. This suggests that the simple loss of peripheral vision could be detrimental to scene recognition, even when the information is displayed in central vision. This study revealed subtle defects in the central visual field of POAG patients that cannot be predicted by static automated perimetry assessment using the Humphrey 24–2 SITA-Standard test. PMID:29481572

  7. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    PubMed

    Banno, Hayaki; Saiki, Jun

    2015-03-01

    Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groupings were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
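
    The ex-Gaussian response-time analysis mentioned above can be sketched with scipy's exponnorm distribution (parameterized by K = tau/sigma); the simulated reaction times below are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

# Simulated reaction times (ms): Gaussian component plus exponential tail.
rng = np.random.default_rng(1)
rts = rng.normal(450, 40, 500) + rng.exponential(120, 500)

# Fit an ex-Gaussian; scipy's exponnorm uses the shape parameter K = tau / sigma.
K, loc, scale = stats.exponnorm.fit(rts)
mu, sigma, tau = loc, scale, K * scale
print(f"mu={mu:.1f} ms, sigma={sigma:.1f} ms, tau={tau:.1f} ms")
```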

  8. Murder, She Wrote

    PubMed Central

    Nasrallah, Maha; Carmel, David; Lavie, Nilli

    2009-01-01

    Enhanced sensitivity to information of negative (compared to positive) valence has an adaptive value, for example, by expediting the correct choice of avoidance behavior. However, previous evidence for such enhanced sensitivity has been inconclusive. Here we report a clear advantage for negative over positive words in categorizing them as emotional. In 3 experiments, participants classified briefly presented (33 ms or 22 ms) masked words as emotional or neutral. Categorization accuracy and valence-detection sensitivity were both higher for negative than for positive words. The results were not due to differences between emotion categories in either lexical frequency, extremeness of valence ratings, or arousal. These results conclusively establish enhanced sensitivity for negative over positive words, supporting the hypothesis that negative stimuli enjoy preferential access to perceptual processing. PMID:19803583

  9. NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization

    PubMed Central

    Parraga, C. Alejandro; Akbarinia, Arash

    2016-01-01

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relate these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart obtaining labelling results that are better than those of current state-of-the-art algorithms. PMID:26954691

  10. NICE: A Computational Solution to Close the Gap from Colour Perception to Colour Categorization.

    PubMed

    Parraga, C Alejandro; Akbarinia, Arash

    2016-01-01

    The segmentation of visible electromagnetic radiation into chromatic categories by the human visual system has been extensively studied from a perceptual point of view, resulting in several colour appearance models. However, there is currently a void when it comes to relate these results to the physiological mechanisms that are known to shape the pre-cortical and cortical visual pathway. This work intends to begin to fill this void by proposing a new physiologically plausible model of colour categorization based on Neural Isoresponsive Colour Ellipsoids (NICE) in the cone-contrast space defined by the main directions of the visual signals entering the visual cortex. The model was adjusted to fit psychophysical measures that concentrate on the categorical boundaries and are consistent with the ellipsoidal isoresponse surfaces of visual cortical neurons. By revealing the shape of such categorical colour regions, our measures allow for a more precise and parsimonious description, connecting well-known early visual processing mechanisms to the less understood phenomenon of colour categorization. To test the feasibility of our method we applied it to exemplary images and a popular ground-truth chart obtaining labelling results that are better than those of current state-of-the-art algorithms.
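
    The core geometric step of a NICE-style categorization, assigning a colour to the nearest ellipsoidal category region in cone-contrast space, can be caricatured as a minimum Mahalanobis-distance rule. The centres and covariances below are made-up placeholders, not the published model's fitted parameters.

```python
import numpy as np

# Hypothetical category ellipsoids in a 3-D cone-contrast space:
# each category has a centre and a covariance matrix defining its ellipsoid.
categories = {
    "red":   (np.array([0.6, 0.1, 0.0]), np.diag([0.02, 0.01, 0.01])),
    "green": (np.array([-0.5, 0.2, 0.0]), np.diag([0.02, 0.02, 0.01])),
    "blue":  (np.array([0.0, -0.3, 0.6]), np.diag([0.01, 0.02, 0.03])),
}

def categorize(point):
    """Return the category whose ellipsoid is closest to the point
    (smallest Mahalanobis distance)."""
    best, best_d = None, np.inf
    for name, (centre, cov) in categories.items():
        diff = point - centre
        d = float(diff @ np.linalg.inv(cov) @ diff)
        if d < best_d:
            best, best_d = name, d
    return best

print(categorize(np.array([0.55, 0.05, 0.02])))  # -> "red"
```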

  11. Evidence from neglect dyslexia for morphological decomposition at the early stages of orthographic-visual analysis

    PubMed Central

    Reznick, Julia; Friedmann, Naama

    2015-01-01

    This study examined whether and how the morphological structure of written words affects reading in word-based neglect dyslexia (neglexia), and what can be learned about morphological decomposition in reading from the effect of morphology on neglexia. The oral reading of 7 Hebrew-speaking participants with acquired neglexia at the word level—6 with left neglexia and 1 with right neglexia—was evaluated. The main finding was that the morphological role of the letters on the neglected side of the word affected neglect errors: When an affix appeared on the neglected side, it was neglected significantly more often than when the neglected side was part of the root; root letters on the neglected side were never omitted, whereas affixes were. Perceptual effects of length and final letter form were found for words with an affix on the neglected side, but not for words in which a root letter appeared in the neglected side. Semantic and lexical factors did not affect the participants' reading and error pattern, and neglect errors did not preserve the morpho-lexical characteristics of the target words. These findings indicate that an early morphological decomposition of words to their root and affixes occurs before access to the lexicon and to semantics, at the orthographic-visual analysis stage, and that the effects did not result from lexical feedback. The same effects of morphological structure on reading were manifested by the participants with left- and right-sided neglexia. Since neglexia is a deficit at the orthographic-visual analysis level, the effect of morphology on reading patterns in neglexia further supports that morphological decomposition occurs in the orthographic-visual analysis stage, prelexically, and that the search for the three letters of the root in Hebrew is a trigger for attention shift in neglexia. PMID:26528159

  12. Speaker and Accent Variation Are Handled Differently: Evidence in Native and Non-Native Listeners

    PubMed Central

    Kriengwatana, Buddhamas; Terry, Josephine; Chládková, Kateřina; Escudero, Paola

    2016-01-01

    Listeners are able to cope with between-speaker variability in speech that stems from anatomical sources (i.e. individual and sex differences in vocal tract size) and sociolinguistic sources (i.e. accents). We hypothesized that listeners adapt to these two types of variation differently because prior work indicates that adapting to speaker/sex variability may occur pre-lexically while adapting to accent variability may require learning from attention to explicit cues (i.e. feedback). In Experiment 1, we tested our hypothesis by training native Dutch listeners and Australian-English (AusE) listeners without any experience with Dutch or Flemish to discriminate between the Dutch vowels /I/ and /ε/ from a single speaker. We then tested their ability to classify /I/ and /ε/ vowels of a novel Dutch speaker (i.e. speaker or sex change only), or vowels of a novel Flemish speaker (i.e. speaker or sex change plus accent change). We found that both Dutch and AusE listeners could successfully categorize vowels if the change involved a speaker/sex change, but not if the change involved an accent change. When AusE listeners were given feedback on their categorization responses to the novel speaker in Experiment 2, they were able to successfully categorize vowels involving an accent change. These results suggest that adapting to accents may be a two-step process, whereby the first step involves adapting to speaker differences at a pre-lexical level, and the second step involves adapting to accent differences at a contextual level, where listeners have access to word meaning or are given feedback that allows them to appropriately adjust their perceptual category boundaries. PMID:27309889

  13. Early lexical and phonological acquisition and its relationships.

    PubMed

    Wiethan, Fernanda Marafiga; Nóro, Letícia Arruda; Mota, Helena Bolli

    2014-01-01

    This study verified likely relationships between the lexical and phonological development of children aged between 1 year and 1 year, 11 months and 29 days who were enrolled in public kindergarten schools of Santa Maria (RS). The sample consisted of 18 children of both genders with typical language development, aged between 1 year and 1 year, 11 months and 29 days, separated into three age subgroups. Video recordings of each child's spontaneous speech were collected, after which a lexical analysis of the types of lexical items produced and a phonological assessment were performed. Acquired and partially acquired sounds were counted together, considering the 19 sounds and two allophones of Brazilian Portuguese. For the statistical analysis, the Kruskal-Wallis and Wilcoxon tests were used, with a significance level of p < 0.05. When the mean percentages of acquired sounds and of acquired plus partially acquired sounds were compared, there were differences between the first and second age subgroups and between the first and third subgroups. In the comparison of the mean number of lexical items produced across age subgroups, there were again differences between the first and second subgroups and between the first and third subgroups. In the comparison between lexical items produced and acquired plus partially acquired sounds within each age subgroup, a difference appeared only in the subgroup aged 1 year and 8 months to 1 year, 11 months and 29 days, in which the sounds stood out. The phonological and lexical domains develop as a growing process and influence each other, with phonology holding a slight advantage.

  14. Effects of Perceptual and Conceptual Similarity in Lexical Priming of Young Children Who Stutter: Preliminary Findings

    PubMed Central

    Hartfield, Kia N.; Conture, Edward G.

    2007-01-01

    The purpose of this study was to investigate the influence of conceptual and perceptual properties of words on the speed and accuracy of lexical retrieval of children who do (CWS) and do not stutter (CWNS) during a picture-naming task. Participants consisted of 13 3- to 5-year-old CWS and the same number of CWNS. All participants had speech, language, and hearing development within normal limits, with the exception of stuttering for CWS. Both talker groups participated in a picture-naming task where they named, one at a time, computer-presented, black-on-white drawings of common age-appropriate objects. These pictures were named during four auditory priming conditions: (a) a neutral prime consisting of a tone, (b) a word prime physically related to the target word, (c) a word prime functionally related to the target word, and (d) a word prime categorically related to the target word. Speech reaction time (SRT) was measured from the offset of presentation of the picture target to the onset of participant’s verbal speech response. Results indicated that CWS were slower than CWNS across priming conditions (i.e., neutral, physical, function, category) and that the speed of lexical retrieval of CWS was more influenced by functional than perceptual aspects of target pictures named. Findings were taken to suggest that CWS tend to organize lexical information functionally more so than physically and that this tendency may relate to difficulties establishing normally fluent speech and language. PMID:17010422

  15. A comparison of haptic material perception in blind and sighted individuals.

    PubMed

    Baumgartner, Elisabeth; Wiebel, Christiane B; Gegenfurtner, Karl R

    2015-10-01

    We investigated material perception in blind participants to explore the influence of visual experience on material representations and the relationship between visual and haptic material perception. In a previous study with sighted participants, we had found participants' visual and haptic judgments of material properties to be very similar (Baumgartner, Wiebel, & Gegenfurtner, 2013). In a categorization task, however, visual exploration had led to higher categorization accuracy than haptic exploration. Here, we asked congenitally blind participants to explore different materials haptically and rate several material properties in order to assess the role of the visual sense for the emergence of haptic material perception. Principal components analyses combined with a procrustes superimposition showed that the material representations of blind and blindfolded sighted participants were highly similar. We also measured haptic categorization performance, which was equal for the two groups. We conclude that haptic material representations can emerge independently of visual experience, and that there are no advantages for either group of observers in haptic categorization. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Spatial frequency supports the emergence of categorical representations in visual cortex during natural scene perception.

    PubMed

    Dima, Diana C; Perry, Gavin; Singh, Krish D

    2018-06-11

    In navigating our environment, we rapidly process and extract meaning from visual cues. However, the relationship between visual features and categorical representations in natural scene perception is still not well understood. Here, we used natural scene stimuli from different categories and filtered at different spatial frequencies to address this question in a passive viewing paradigm. Using representational similarity analysis (RSA) and cross-decoding of magnetoencephalography (MEG) data, we show that categorical representations emerge in human visual cortex at ∼180 ms and are linked to spatial frequency processing. Furthermore, dorsal and ventral stream areas reveal temporally and spatially overlapping representations of low and high-level layer activations extracted from a feedforward neural network. Our results suggest that neural patterns from extrastriate visual cortex switch from low-level to categorical representations within 200 ms, highlighting the rapid cascade of processing stages essential in human visual perception. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
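
    Representational similarity analysis (RSA), as used above, compares the geometry of neural and model representations by correlating their dissimilarity matrices. A minimal sketch with fabricated MEG patterns and network-layer features:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_stimuli = 24

# Hypothetical response patterns: MEG sensors vs. one network layer.
meg_patterns = rng.random((n_stimuli, 102))     # stimuli x sensors
layer_features = rng.random((n_stimuli, 512))   # stimuli x model units

# Representational dissimilarity matrices (condensed upper triangles).
rdm_meg = pdist(meg_patterns, metric="correlation")
rdm_model = pdist(layer_features, metric="correlation")

# RSA: rank-correlate the two RDMs.
rho, p = spearmanr(rdm_meg, rdm_model)
print(f"model-brain similarity: rho={rho:.3f}, p={p:.3f}")
```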

  17. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.
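
    Predictive encoding models of the kind described above map features from a deep network onto cortical responses. A minimal sketch using hypothetical feature and response arrays, with scikit-learn ridge regression standing in for the authors' exact estimator:

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_samples, n_features, n_voxels = 600, 256, 50

# Hypothetical deep-network features for movie frames and voxel responses.
features = rng.standard_normal((n_samples, n_features))
true_weights = rng.standard_normal((n_features, n_voxels)) * 0.1
responses = features @ true_weights + rng.standard_normal((n_samples, n_voxels))

X_tr, X_te, y_tr, y_te = train_test_split(features, responses, random_state=0)

# One linear encoding model per voxel (Ridge handles the multi-output case).
model = Ridge(alpha=10.0).fit(X_tr, y_tr)
print("held-out R^2:", round(model.score(X_te, y_te), 3))
```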

  18. Holistic Face Categorization in Higher Order Visual Areas of the Normal and Prosopagnosic Brain: Toward a Non-Hierarchical View of Face Perception

    PubMed Central

    Rossion, Bruno; Dricot, Laurence; Goebel, Rainer; Busigny, Thomas

    2011-01-01

    How a visual stimulus is initially categorized as a face in a network of human brain areas remains largely unclear. Hierarchical neuro-computational models of face perception assume that the visual stimulus is first decomposed into local parts in lower order visual areas. These parts would then be combined into a global representation in higher order face-sensitive areas of the occipito-temporal cortex. Here we tested this view in fMRI with visual stimuli that are categorized as faces based on their global configuration rather than their local parts (two-tone Mooney figures and Arcimboldo's facelike paintings). Compared to the same inverted visual stimuli that are not categorized as faces, these stimuli activated the right middle fusiform gyrus (“Fusiform face area”) and superior temporal sulcus (pSTS), with no significant activation in the posteriorly located inferior occipital gyrus (i.e., no “occipital face area”). This observation is strengthened by behavioral and neural evidence for normal face categorization of these stimuli in a brain-damaged prosopagnosic patient whose intact right middle fusiform gyrus and superior temporal sulcus are devoid of any potential face-sensitive inputs from the lesioned right inferior occipital cortex. Together, these observations indicate that face-preferential activation may emerge in higher order visual areas of the right hemisphere without any face-preferential inputs from lower order visual areas, supporting a non-hierarchical view of face perception in the visual cortex. PMID:21267432

  19. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss

    PubMed Central

    Stewart, Erin K.; Wu, Yu-Hsiang; Bishop, Christopher; Bentler, Ruth A.; Tremblay, Kelly

    2017-01-01

    Purpose This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. Method Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, 2 measures of WM were taken: a reading span measure, and Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. Results A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. Conclusion The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, more significant effect sizes are needed. PMID:28744550
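
    A linear mixed model of the kind reported above can be sketched with statsmodels; the simulated data below only illustrate the model structure (fixed effects for visual cues, pure-tone average, and noise type, with random subject intercepts) and are not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subjects, n_trials = 30, 8

# Hypothetical trial-level data: speech-recognition score per subject,
# with visual cues (0/1), pure-tone average (PTA), and noise type (0/1).
rows = []
for subj in range(n_subjects):
    pta = rng.normal(45, 10)          # hearing loss in dB HL
    subj_intercept = rng.normal(0, 5)
    for _ in range(n_trials):
        visual = rng.integers(0, 2)
        babble = rng.integers(0, 2)
        score = 60 + 15 * visual - 0.4 * pta + subj_intercept + rng.normal(0, 8)
        rows.append({"subject": subj, "score": score, "visual": visual,
                     "pta": pta, "babble": babble})
df = pd.DataFrame(rows)

# Mixed model: fixed effects for cues, PTA, noise; random subject intercepts.
result = smf.mixedlm("score ~ visual + pta + babble", df, groups=df["subject"]).fit()
print(result.summary())
```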

  20. Evaluating a Split Processing Model of Visual Word Recognition: Effects of Orthographic Neighborhood Size

    ERIC Educational Resources Information Center

    Lavidor, Michal; Hayes, Adrian; Shillcock, Richard; Ellis, Andrew W.

    2004-01-01

    The split fovea theory proposes that visual word recognition of centrally presented words is mediated by the splitting of the foveal image, with letters to the left of fixation being projected to the right hemisphere (RH) and letters to the right of fixation being projected to the left hemisphere (LH). Two lexical decision experiments aimed to…

  1. Phonological Contribution during Visual Word Recognition in Child Readers. An Intermodal Priming Study in Grades 3 and 5

    ERIC Educational Resources Information Center

    Sauval, Karinne; Casalis, Séverine; Perre, Laetitia

    2017-01-01

    This study investigated the phonological contribution during visual word recognition in child readers as a function of general reading expertise (third and fifth grades) and specific word exposure (frequent and less-frequent words). An intermodal priming in lexical decision task was performed. Auditory primes (identical and unrelated) were used in…

  2. Language Non-Selective Activation of Orthography during Spoken Word Processing in Hindi-English Sequential Bilinguals: An Eye Tracking Visual World Study

    ERIC Educational Resources Information Center

    Mishra, Ramesh Kumar; Singh, Niharika

    2014-01-01

    Previous psycholinguistic studies have shown that bilinguals activate lexical items of both the languages during auditory and visual word processing. In this study we examined if Hindi-English bilinguals activate the orthographic forms of phonological neighbors of translation equivalents of the non target language while listening to words either…

  3. Brief Communication: visual-field superiority as a function of stimulus type and content: further evidence.

    PubMed

    Basu, Anamitra; Mandal, Manas K

    2004-07-01

    The present study examined visual-field advantage as a function of presentation mode (unilateral, bilateral), stimulus structure (facial, lexical), and stimulus content (emotional, neutral). The experiment was conducted in a split visual-field paradigm using a JAVA-based computer program with recognition accuracy as the dependent measure. Unilaterally, rather than bilaterally, presented stimuli were significantly better recognized. Words were significantly better recognized than faces in the right visual-field; the difference was nonsignificant in the left visual-field. Emotional content elicited left visual-field and neutral content elicited right visual-field advantages. Copyright Taylor and Francis Inc.

  4. A neuroimaging study of conflict during word recognition.

    PubMed

    Riba, Jordi; Heldmann, Marcus; Carreiras, Manuel; Münte, Thomas F

    2010-08-04

    Using functional magnetic resonance imaging the neural activity associated with error commission and conflict monitoring in a lexical decision task was assessed. In a cohort of 20 native speakers of Spanish conflict was introduced by presenting words with high and low lexical frequency and pseudo-words with high and low syllabic frequency for the first syllable. Erroneous versus correct responses showed activation in the frontomedial and left inferior frontal cortex. A similar pattern was found for correctly classified words of low versus high lexical frequency and for correctly classified pseudo-words of high versus low syllabic frequency. Conflict-related activations for language materials largely overlapped with error-induced activations. The effect of syllabic frequency underscores the role of sublexical processing in visual word recognition and supports the view that the initial syllable mediates between the letter and word level.

  5. The effect of voice onset time differences on lexical access in Dutch.

    PubMed

    van Alphen, Petra M; McQueen, James M

    2006-02-01

    Effects on spoken-word recognition of prevoicing differences in Dutch initial voiced plosives were examined. In 2 cross-modal identity-priming experiments, participants heard prime words and nonwords beginning with voiced plosives with 12, 6, or 0 periods of prevoicing or matched items beginning with voiceless plosives and made lexical decisions to visual tokens of those items. Six-period primes had the same effect as 12-period primes. Zero-period primes had a different effect, but only when their voiceless counterparts were real words. Listeners could nevertheless discriminate the 6-period primes from the 12- and 0-period primes. Phonetic detail appears to influence lexical access only to the extent that it is useful: In Dutch, presence versus absence of prevoicing is more informative than amount of prevoicing. ((c) 2006 APA, all rights reserved).

  6. Responses on a lateralized lexical decision task relate to both reading times and comprehension.

    PubMed

    Michael, Mary

    2009-12-01

    Research over the last few years has shown that the dominance of the left hemisphere in language processing is less complete than previously thought [Beeman, M. (1993). Semantic processing in the right hemisphere may contribute to drawing inferences from discourse. Brain and Language, 44, 80-120; Faust, M., & Chiarello, C. (1998). Sentence context and lexical ambiguity resolution by the two hemispheres. Neuropsychologia, 36(9), 827-835; Weems, S. A., & Zaidel, E. (2004). The relationship between reading ability and lateralized lexical decision. Brain and Cognition, 55(3), 507-515]. Engaging the right brain in language processing is required for processing speaker/writer intention, particularly in those subtle interpretive processes that help in deciphering humor, irony, and emotional inference. In two experiments employing a divided field or lateralized lexical decision task (LLDT), accuracy and reaction times (RTs) were related to reading times and comprehension on sentence reading. Differences seen in RTs and error rates by visual fields were found to relate to performance. Smaller differences in performance between fields tended to be related to better performance on the LLDT in both experiments and, in Experiment 1, to reading measures. Readers who can exploit both hemispheres for language processing equally appear to be at an advantage in lexical access and possibly also in reading performance.

  7. Impact of feature saliency on visual category learning.

    PubMed

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, that requires observing multiple objects for learning the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, by also discussing kinds of supervisory information enabling reflective categorization. Arguably, principles debated here are often being ignored in categorization studies.

  8. Impact of feature saliency on visual category learning

    PubMed Central

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the ‘essence’ of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, that requires observing multiple objects for learning the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions can best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, by also discussing kinds of supervisory information enabling reflective categorization. Arguably, principles debated here are often being ignored in categorization studies. PMID:25954220

  9. Domain Differences in the Weights of Perceptual and Conceptual Information in Children's Categorization

    ERIC Educational Resources Information Center

    Diesendruck, Gil; Peretz, Shimon

    2013-01-01

    Visual appearance is one of the main cues children rely on when categorizing novel objects. In 3 studies, testing 128 3-year-olds and 192 5-year-olds, we investigated how various kinds of information may differentially lead children to overlook visual appearance in their categorization decisions across domains. Participants saw novel animals or…

  10. Psychocentricity and participant profiles: implications for lexical processing among multilinguals

    PubMed Central

    Libben, Gary; Curtiss, Kaitlin; Weber, Silke

    2014-01-01

    Lexical processing among bilinguals is often affected by complex patterns of individual experience. In this paper we discuss the psychocentric perspective on language representation and processing, which highlights the centrality of individual experience in psycholinguistic experimentation. We discuss applications to the investigation of lexical processing among multilinguals and explore the advantages of using high-density experiments with multilinguals. High-density experiments are designed to co-index measures of lexical perception and production, as well as participant profiles. We discuss the challenges associated with the characterization of participant profiles and present a new data visualization technique that we term Facial Profiles. This technique is based on Chernoff faces developed over 40 years ago. The Facial Profile technique seeks to overcome some of the challenges associated with the use of Chernoff faces, while maintaining the core insight that recoding multivariate data as facial features can engage the human face recognition system and thus enhance our ability to detect and interpret patterns within multivariate datasets. We demonstrate that Facial Profiles can code participant characteristics in lexical processing studies by recoding variables such as reading ability, speaking ability, and listening ability into iconically-related relative sizes of eye, mouth, and ear, respectively. The balance of ability in bilinguals can be captured by creating composite facial profiles or Janus Facial Profiles. We demonstrate the use of Facial Profiles and Janus Facial Profiles in the characterization of participant effects in the study of lexical perception and production. PMID:25071614
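
    The Facial Profile idea recodes participant variables as the sizes of facial features (eye for reading, mouth for speaking, ear for listening). The sketch below is a bare-bones matplotlib illustration of that mapping; the scaling and layout are assumptions, not the authors' implementation.

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Ellipse

def facial_profile(ax, reading, speaking, listening):
    """Draw a toy 'facial profile': scores in [0, 1] scale eye, mouth, and ear."""
    ax.add_patch(Circle((0.5, 0.5), 0.4, fill=False))                 # head outline
    ax.add_patch(Circle((0.38, 0.6), 0.03 + 0.07 * reading))          # left eye
    ax.add_patch(Circle((0.62, 0.6), 0.03 + 0.07 * reading))          # right eye
    ax.add_patch(Ellipse((0.5, 0.35), 0.1 + 0.2 * speaking, 0.05))    # mouth
    ax.add_patch(Ellipse((0.1, 0.5), 0.05, 0.08 + 0.15 * listening))  # ear
    ax.set_xlim(0, 1)
    ax.set_ylim(0, 1)
    ax.set_aspect("equal")
    ax.axis("off")

fig, axes = plt.subplots(1, 2, figsize=(6, 3))
facial_profile(axes[0], reading=0.9, speaking=0.4, listening=0.7)  # participant A
facial_profile(axes[1], reading=0.3, speaking=0.8, listening=0.5)  # participant B
plt.show()
```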

  11. Functional MRI evidence for the decline of word retrieval and generation during normal aging.

    PubMed

    Baciu, M; Boudiaf, N; Cousin, E; Perrone-Bertolotti, M; Pichat, C; Fournet, N; Chainay, H; Lamalle, L; Krainik, A

    2016-02-01

    This fMRI study aimed to explore the effect of normal aging on word retrieval and generation. The question addressed was whether the decline in lexical production is determined by a direct mechanism concerning language operations themselves, or is instead induced indirectly by a decline of executive functions. The main hypothesis was that normal aging does not induce a loss of lexical knowledge; rather, there is only a general slowdown in the retrieval mechanisms involved in lexical processing, due to a possible decline of executive functions. We used three tasks (verbal fluency, object naming, and semantic categorization). Two groups of participants were tested (Young, Y and Aged, A), without cognitive or psychiatric impairment and with similar levels of vocabulary. Neuropsychological testing revealed that older participants had lower executive function scores, slower processing speed, and tended to have lower verbal fluency scores. Additionally, older participants showed higher scores for verbal automatisms and overlearned information. In terms of behavioral data, older participants performed as accurately as younger adults, but they were significantly slower in semantic categorization and less fluent in the verbal fluency task. Functional MRI analyses suggested that older adults did not simply activate fewer brain regions involved in word production; they actually showed an atypical pattern of activation. Significant correlations between the BOLD (Blood Oxygen Level Dependent) signal of aging-related (A > Y) regions and cognitive scores suggested that this atypical pattern of activation may reveal several compensatory mechanisms (a) to overcome the slowdown in retrieval due to the decline of executive functions and processing speed and (b) to inhibit automatic verbal processes. The BOLD signal measured in some other aging-dependent regions did not correlate with the behavioral and neuropsychological scores, and the overactivation of these uncorrelated regions would simply reveal the dedifferentiation that occurs with aging. Altogether, our results suggest that normal aging is associated with more difficult access to lexico-semantic operations and representations through a slowdown in executive functions, without any conceptual loss.

  12. The Developmental Lexicon Project: A behavioral database to investigate visual word recognition across the lifespan.

    PubMed

    Schröter, Pauline; Schroeder, Sascha

    2017-12-01

    With the Developmental Lexicon Project (DeveL), we present a large-scale study that was conducted to collect data on visual word recognition in German across the lifespan. A total of 800 children from Grades 1 to 6, as well as two groups of younger and older adults, participated in the study and completed a lexical decision and a naming task. We provide a database for 1,152 German words, comprising behavioral data from seven different stages of reading development, along with sublexical and lexical characteristics for all stimuli. The present article describes our motivation for this project, explains the methods we used to collect the data, and reports analyses on the reliability of our results. In addition, we explored developmental changes in three marker effects in psycholinguistic research: word length, word frequency, and orthographic similarity. The database is available online.

  13. AdjScales: Visualizing Differences between Adjectives for Language Learners

    NASA Astrophysics Data System (ADS)

    Sheinman, Vera; Tokunaga, Takenobu

    In this study we introduce AdjScales, a method for scaling similar adjectives by their strength. It combines existing Web-based computational linguistic techniques in order to automatically differentiate between similar adjectives that describe the same property by strength. Though this kind of information is rarely present in most of the lexical resources and dictionaries, it may be useful for language learners that try to distinguish between similar words. Additionally, learners might gain from a simple visualization of these differences using unidimensional scales. The method is evaluated by comparison with annotation on a subset of adjectives from WordNet by four native English speakers. It is also compared against two non-native speakers of English. The collected annotation is an interesting resource in its own right. This work is a first step toward automatic differentiation of meaning between similar words for language learners. AdjScales can be useful for lexical resource enhancement.
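
    AdjScales orders near-synonymous adjectives by strength using intensity-diagnostic lexical patterns such as "X, but not Y", which suggests that Y is stronger than X. The sketch below substitutes hypothetical pattern counts for live Web queries and derives a simple scale from them.

```python
from collections import defaultdict

# Hypothetical counts for the pattern "X, but not Y" (taken to imply Y is
# stronger than X); in the original method such counts come from Web queries.
pattern_counts = {
    ("warm", "hot"): 120,
    ("hot", "scorching"): 35,
    ("warm", "scorching"): 10,
    ("hot", "warm"): 4,        # noise in the opposite direction
}

wins = defaultdict(int)    # times an adjective appears on the "stronger" side
total = defaultdict(int)   # times it takes part in the pattern at all
for (weaker, stronger), count in pattern_counts.items():
    wins[stronger] += count
    total[stronger] += count
    total[weaker] += count

# Strength score: proportion of pattern occurrences in which the adjective
# was the stronger member; sort adjectives onto a single scale.
scale = sorted(total, key=lambda adj: wins[adj] / total[adj])
print(" < ".join(scale))   # warm < hot < scorching
```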

  14. Is the masked priming same-different task a pure measure of prelexical processing?

    PubMed

    Kelly, Andrew N; van Heuven, Walter J B; Pitchford, Nicola J; Ledgeway, Timothy

    2013-01-01

    To study prelexical processes involved in visual word recognition, a task is needed that operates only at the level of abstract letter identities. The masked priming same-different task has been purported to do this, as the same pattern of priming is shown for words and nonwords. However, studies using this task have consistently found a processing advantage for words over nonwords, indicating a lexicality effect. We investigated the locus of this word advantage. Experiment 1 used conventional visually presented reference stimuli to test previous accounts of the lexicality effect. Results rule out the use of different strategies, or strength of representations, for words and nonwords. No interaction was shown between prime type and word type, but a consistent word advantage was found. Experiment 2 used novel auditorily presented reference stimuli to restrict nonword matching to the sublexical level. This abolished scrambled priming for nonwords, but not words. Overall this suggests that the processing advantage for words over nonwords results from activation of whole-word, lexical representations. Furthermore, the number of shared open-bigrams between primes and targets could account for scrambled priming effects. These results have important implications for models of orthographic processing and studies that have used this task to investigate prelexical processes.

  15. How strongly do word reading times and lexical decision times correlate? Combining data from eye movement corpora and megastudies.

    PubMed

    Kuperman, Victor; Drieghe, Denis; Keuleers, Emmanuel; Brysbaert, Marc

    2013-01-01

    We assess the amount of shared variance between three measures of visual word recognition latencies: eye movement latencies, lexical decision times, and naming times. After partialling out the effects of word frequency and word length, two well-documented predictors of word recognition latencies, we see that 7-44% of the variance is uniquely shared between lexical decision times and naming times, depending on the frequency range of the words used. A similar analysis of eye movement latencies shows that the percentage of variance they uniquely share either with lexical decision times or with naming times is much lower. It is 5-17% for gaze durations and lexical decision times in studies with target words presented in neutral sentences, but drops to 0.2% for corpus studies in which eye movements to all words are analysed. Correlations between gaze durations and naming latencies are lower still. These findings suggest that processing times in isolated word processing and continuous text reading are affected by specific task demands and presentation format, and that lexical decision times and naming times are not very informative in predicting eye movement latencies in text reading once the effect of word frequency and word length are taken into account. The difference between controlled experiments and natural reading suggests that reading strategies and stimulus materials may determine the degree to which the immediacy-of-processing assumption and the eye-mind assumption apply. Fixation times are more likely to exclusively reflect the lexical processing of the currently fixated word in controlled studies with unpredictable target words rather than in natural reading of sentences or texts.
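
    The shared-variance figures quoted above come from correlating word-recognition measures after partialling out word frequency and length. A minimal sketch of that computation on fabricated item-level data, residualizing both measures by ordinary least squares:

```python
import numpy as np

def residualize(y, covariates):
    """Return residuals of y after regressing out the covariates (OLS)."""
    X = np.column_stack([np.ones(len(y)), covariates])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

rng = np.random.default_rng(4)
n_words = 1000
log_freq = rng.normal(3, 1, n_words)
length = rng.integers(3, 10, n_words).astype(float)

# Hypothetical latencies driven partly by frequency/length, partly by a shared source.
shared = rng.normal(0, 20, n_words)
ldt = 600 - 30 * log_freq + 10 * length + shared + rng.normal(0, 30, n_words)
naming = 500 - 20 * log_freq + 8 * length + shared + rng.normal(0, 30, n_words)

covs = np.column_stack([log_freq, length])
r = np.corrcoef(residualize(ldt, covs), residualize(naming, covs))[0, 1]
print(f"uniquely shared variance: {r**2:.1%}")
```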

  16. Processes of conscious and unconscious memory: evidence from current research on dissociation of memories within a test.

    PubMed

    Cheng, Chao-Ming; Huang, Chin-Lan

    2011-01-01

    The processes of conscious memory (CM) and unconscious memory (UM) are explored, based on the results of the current and previous studies in which the 2 forms of memory within a test were separated by either the process dissociation or metacognition-based dissociation procedure. The results assessing influences of shallow and deep processing, association, and self-generation on CM in explicit and implicit tests are taken as evidence that CM in a test is driven not only conceptually but also by the driving nature of the test, and CM benefits from an encoding condition to the extent that information processing for CM recapitulates that engaged in the encoding condition. Those influences on UM in explicit and implicit tests are taken to support the view that UM in a test is driven by the nature of the test itself, and UM benefits from an encoding condition to the extent that the cognitive environments at test and at study match to activate the same type of information (e.g., visual, lexical, or semantic) about memory items or the same content of a preexisting association or categorical structure.

  17. When semantics aids phonology: A processing advantage for iconic word forms in aphasia.

    PubMed

    Meteyard, Lotte; Stoppard, Emily; Snudden, Dee; Cappa, Stefano F; Vigliocco, Gabriella

    2015-09-01

    Iconicity is the non-arbitrary relation between properties of a phonological form and semantic content (e.g. "moo", "splash"). It is a common feature of both spoken and signed languages, and recent evidence shows that iconic forms confer an advantage during word learning. We explored whether iconic forms conferred a processing advantage for 13 individuals with aphasia following left-hemisphere stroke. Iconic and control words were compared in four different tasks: repetition, reading aloud, auditory lexical decision and visual lexical decision. An advantage for iconic words was seen for some individuals in all tasks, with consistent group effects emerging in reading aloud and auditory lexical decision. Both these tasks rely on mapping between semantics and phonology. We conclude that iconicity aids spoken word processing for individuals with aphasia. This advantage is due to a stronger connection between semantic information and phonological forms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Combining Video, Audio and Lexical Indicators of Affect in Spontaneous Conversation via Particle Filtering

    PubMed Central

    Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini

    2013-01-01

    We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively. PMID:25300451

  19. Combining Video, Audio and Lexical Indicators of Affect in Spontaneous Conversation via Particle Filtering.

    PubMed

    Savran, Arman; Cao, Houwei; Shah, Miraj; Nenkova, Ani; Verma, Ragini

    2012-01-01

    We present experiments on fusing facial video, audio and lexical indicators for affect estimation during dyadic conversations. We use temporal statistics of texture descriptors extracted from facial video, a combination of various acoustic features, and lexical features to create regression based affect estimators for each modality. The single modality regressors are then combined using particle filtering, by treating these independent regression outputs as measurements of the affect states in a Bayesian filtering framework, where previous observations provide prediction about the current state by means of learned affect dynamics. Tested on the Audio-visual Emotion Recognition Challenge dataset, our single modality estimators achieve substantially higher scores than the official baseline method for every dimension of affect. Our filtering-based multi-modality fusion achieves correlation performance of 0.344 (baseline: 0.136) and 0.280 (baseline: 0.096) for the fully continuous and word level sub challenges, respectively.
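
    The fusion scheme above treats each modality's regression output as a noisy measurement of a latent affect state and combines them with a particle filter. Below is a generic bootstrap particle filter sketch under simple random-walk dynamics and Gaussian measurement noise; the dynamics, noise levels, and measurement streams are illustrative assumptions, not the authors' trained models.

```python
import numpy as np

rng = np.random.default_rng(5)
n_particles, n_steps = 500, 100

# Hypothetical latent affect trajectory and three per-modality estimates
# (video, audio, lexical), each a noisy measurement of the latent state.
latent = np.cumsum(rng.normal(0, 0.05, n_steps))
noise = {"video": 0.3, "audio": 0.4, "lexical": 0.5}
measurements = {m: latent + rng.normal(0, s, n_steps) for m, s in noise.items()}

particles = rng.normal(0, 1, n_particles)
estimates = []
for t in range(n_steps):
    # Predict: random-walk affect dynamics.
    particles = particles + rng.normal(0, 0.05, n_particles)
    # Update: combine Gaussian log-likelihoods from all modalities.
    log_w = np.zeros(n_particles)
    for m, sigma in noise.items():
        log_w += -0.5 * ((measurements[m][t] - particles) / sigma) ** 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    estimates.append(float(np.sum(w * particles)))
    # Resample particles in proportion to their weights.
    particles = particles[rng.choice(n_particles, n_particles, p=w)]

print("mean absolute error:", round(np.mean(np.abs(np.array(estimates) - latent)), 3))
```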

  20. Using Student Writing and Lexical Analysis to Reveal Student Thinking about the Role of Stop Codons in the Central Dogma.

    PubMed

    Prevost, Luanna B; Smith, Michelle K; Knight, Jennifer K

    2016-01-01

    Previous work has shown that students have persistent difficulties in understanding how central dogma processes can be affected by a stop codon mutation. To explore these difficulties, we modified two multiple-choice questions from the Genetics Concept Assessment into three open-ended questions that asked students to write about how a stop codon mutation potentially impacts replication, transcription, and translation. We then used computer-assisted lexical analysis combined with human scoring to categorize student responses. The lexical analysis models showed high agreement with human scoring, demonstrating that this approach can be successfully used to analyze large numbers of student written responses. The results of this analysis show that students' ideas about one process in the central dogma can affect their thinking about subsequent and previous processes, leading to mixed models of conceptual understanding. © 2016 L. B. Prevost et al. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
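
    As a rough illustration of the kind of computer-assisted lexical analysis described above, the sketch below builds bag-of-words features from student responses, trains a classifier against human category codes, and reports agreement (Cohen's kappa) on held-out responses. It is a generic scikit-learn pipeline under assumed inputs (lists of response strings and human codes), not the software or feature set used in the study.

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import cohen_kappa_score

      def score_responses(responses, human_codes):
          """Train a lexical classifier on student written responses and report
          agreement with human scoring (Cohen's kappa) on a held-out split."""
          X_train, X_test, y_train, y_test = train_test_split(
              responses, human_codes, test_size=0.25, random_state=0)

          vectorizer = CountVectorizer(lowercase=True, stop_words="english")
          clf = LogisticRegression(max_iter=1000)
          clf.fit(vectorizer.fit_transform(X_train), y_train)

          predicted = clf.predict(vectorizer.transform(X_test))
          return cohen_kappa_score(y_test, predicted)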

  1. Early object labels: the case for a developmental lexical principles framework.

    PubMed

    Golinkoff, R M; Mervis, C B; Hirsh-Pasek, K

    1994-02-01

    Universally, object names make up the largest proportion of any word type found in children's early lexicons. Here we present and critically evaluate a set of six lexical principles (some previously proposed and some new) for making object label learning a manageable task. Overall, the principles have the effect of reducing the amount of information that language-learning children must consider for what a new word might mean. These principles are constructed by children in a two-tiered developmental sequence, as a function of their sensitivity to linguistic input, contextual information, and social-interactional cues. Thus, the process of lexical acquisition changes as a result of the particular principles a given child has at his or her disposal. For children who have only the principles of the first tier (reference, extendibility, and object scope), word learning has a deliberate and laborious look. The principles of the second tier (categorical scope, novel name-nameless category or N3C, and conventionality) enable the child to acquire many new labels rapidly. The present unified account is argued to have a number of advantages over treating such principles separately and non-developmentally. Further, the explicit recognition that the acquisition and operation of these principles is influenced by the child's interpretation of both linguistic and non-linguistic input is seen as an advance.

  2. Identifying missing dictionary entries with frequency-conserving context models

    NASA Astrophysics Data System (ADS)

    Williams, Jake Ryland; Clark, Eric M.; Bagrow, James P.; Danforth, Christopher M.; Dodds, Peter Sheridan

    2015-10-01

    In an effort to better understand meaning from natural language texts, we explore methods aimed at organizing lexical objects into contexts. A number of these methods for organization fall into a family defined by word ordering. Unlike demographic or spatial partitions of data, these collocation models are of special importance for their universal applicability. While we are interested here in text and have framed our treatment appropriately, our work is potentially applicable to other areas of research (e.g., speech, genomics, and mobility patterns) where one has ordered categorical data (e.g., sounds, genes, and locations). Our approach focuses on the phrase (whether word or larger) as the primary meaning-bearing lexical unit and object of study. To do so, we employ our previously developed framework for generating word-conserving phrase-frequency data. Upon training our model with the Wiktionary, an extensive, online, collaborative, and open-source dictionary that contains over 100,000 phrasal definitions, we develop highly effective filters for the identification of meaningful, missing phrase entries. With our predictions we then engage the editorial community of the Wiktionary and propose short lists of potential missing entries for definition, developing a breakthrough lexical extraction technique and expanding our knowledge of the defined English lexicon of phrases.
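
    Stated abstractly, the filtering idea is to surface phrases that are frequent in running text yet absent from the dictionary. The toy function below uses raw bigram counts as a deliberately simplified stand-in for the paper's word-conserving phrase-frequency framework; the token stream and the set of existing dictionary entries are assumed inputs.

      from collections import Counter

      def candidate_missing_entries(tokens, dictionary_phrases, top_k=20):
          """Rank two-word phrases that are frequent in a corpus but not yet
          defined in the dictionary. `tokens` is a list of word tokens;
          `dictionary_phrases` is a set of known multi-word entries."""
          bigram_counts = Counter(
              " ".join(pair) for pair in zip(tokens, tokens[1:]))
          missing = {phrase: count for phrase, count in bigram_counts.items()
                     if phrase not in dictionary_phrases}
          return sorted(missing.items(), key=lambda kv: kv[1], reverse=True)[:top_k]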

  3. Developmental differences in masked form priming are not driven by vocabulary growth.

    PubMed

    Bhide, Adeetee; Schlaggar, Bradley L; Barnes, Kelly Anne

    2014-01-01

    As children develop into skilled readers, they are able to more quickly and accurately distinguish between words with similar visual forms (i.e., they develop precise lexical representations). The masked form priming lexical decision task is used to test the precision of lexical representations. In this paradigm, a prime (which differs by one letter from the target) is briefly flashed before the target is presented. Participants make a lexical decision to the target. Primes can facilitate reaction time by partially activating the lexical entry for the target. If a prime is unable to facilitate reaction time, it is assumed that participants have a precise orthographic representation of the target and thus the prime is not a close enough match to activate its lexical entry. Previous developmental work has shown that children's and adults' lexical decision times are facilitated by form primes preceding words from small neighborhoods (i.e., very few words can be formed by changing one letter in the original word; low N words), but only children are facilitated by form primes preceding words from large neighborhoods (high N words). It has been hypothesized that written vocabulary growth drives the increase in the precision of the orthographic representations; children may not know all of the neighbors of the high N words, making the words effectively low N for them. We tested this hypothesis by (1) equating the effective orthographic neighborhood size of the targets for children and adults and (2) testing whether age or vocabulary size was a better predictor of the extent of form priming. We found priming differences even when controlling for effective neighborhood size. Furthermore, age was a better predictor of form priming effects than was vocabulary size. Our findings provide no support for the hypothesis that growth in written vocabulary size gives rise to more precise lexical representations. We propose that the development of spelling ability may be a more important factor.
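
    The high-N/low-N distinction above rests on orthographic neighborhood size: conventionally, the number of same-length words that differ from the target by exactly one letter (Coltheart's N). A direct implementation over an assumed word list might look like this; the toy lexicon is purely illustrative.

      def neighborhood_size(word, lexicon):
          """Coltheart's N: number of lexicon entries of the same length that
          differ from `word` by substituting exactly one letter."""
          def one_letter_apart(a, b):
              return (len(a) == len(b)
                      and sum(x != y for x, y in zip(a, b)) == 1)
          return sum(one_letter_apart(word, entry) for entry in lexicon)

      # Example with a toy lexicon (illustrative, not a norming database):
      lexicon = {"cat", "cot", "cut", "car", "bat", "dog", "cart"}
      print(neighborhood_size("cat", lexicon))  # -> 4 (cot, cut, car, bat)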

  4. Masked immediate-repetition-priming effect on the early lexical process in the bilateral anterior temporal areas assessed by neuromagnetic responses.

    PubMed

    Fujimaki, Norio; Hayakawa, Tomoe; Ihara, Aya; Matani, Ayumu; Wei, Qiang; Terazono, Yasushi; Murata, Tsutomu

    2010-10-01

    A masked priming paradigm has been used to measure unconscious and automatic context effects on the processing of words. However, its spatiotemporal neural basis has not yet been clarified. To test the hypothesis that masked repetition priming causes enhancement of neural activation, we conducted a magnetoencephalography experiment in which a prime was visually presented for a short duration (50 ms), preceded by a mask pattern, and followed by a target word that was represented by a Japanese katakana syllabogram. The prime, which was identical to the target, was represented by another hiragana syllabogram in the "Repeated" condition, whereas it was a string of unreadable pseudocharacters in the "Unrepeated" condition. Subjects executed a categorical decision task on the target. Activation was significantly larger for the Repeated condition than for the Unrepeated condition at a time window of 150-250 ms in the right occipital area, 200-250 ms in the bilateral ventral occipitotemporal areas, and 200-250 ms and 200-300 ms in the left and right anterior temporal areas, respectively. These areas have been reported to be related to processing of visual-form/orthography and lexico-semantics, and the enhanced activation supports the hypothesis. However, the absence of the priming effect in the areas related to phonological processing implies that automatic phonological priming effect depends on task requirements. 2010 Elsevier Ireland Ltd and the Japan Neuroscience Society. All rights reserved.

  5. A neurocomputational account of taxonomic responding and fast mapping in early word learning.

    PubMed

    Mayor, Julien; Plunkett, Kim

    2010-01-01

    We present a neurocomputational model with self-organizing maps that accounts for the emergence of taxonomic responding and fast mapping in early word learning, as well as a rapid increase in the rate of acquisition of words observed in late infancy. The quality and efficiency of generalization of word-object associations are directly related to the quality of prelexical, categorical representations in the model. We show how synaptogenesis supports coherent generalization of word-object associations and show that later synaptic pruning minimizes metabolic costs without being detrimental to word learning. The role played by joint-attentional activities is identified in the model, both at the level of selecting efficient cross-modal synapses and at the behavioral level, by accelerating and refining overall vocabulary acquisition. The model can account for the qualitative shift in the way infants use words, from an associative to a referential-like use, for the pattern of overextension errors in production and comprehension observed during early childhood, and for typicality effects observed in lexical development. Interesting by-products of the model include a potential explanation of the shift from prototype to exemplar-based effects reported for adult category formation, an account of mispronunciation effects in early lexical development, and extendability to include accounts of individual differences in lexical development and specific disorders such as Williams syndrome. The model demonstrates how an established constraint on lexical learning, which has often been regarded as domain-specific, can emerge from domain-general learning principles that are simultaneously biologically, psychologically, and socially plausible.
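
    The learning rule at the heart of self-organizing-map models of this kind is a winner-take-most update: the map unit closest to the current input, and its grid neighbors, are pulled toward that input. The sketch below shows only this generic single-map update; the published model additionally couples visual and acoustic maps through cross-modal synapses and models synaptogenesis and pruning, which are omitted here. Parameter values are arbitrary.

      import numpy as np

      def train_som(inputs, grid=(10, 10), epochs=20, lr=0.5, sigma=2.0, seed=0):
          """Minimal self-organizing map: a 2-D grid of units, each holding a
          weight vector in input space, trained by neighborhood updates.
          (Learning rate and neighborhood width are held fixed here; they are
          typically annealed over training.)"""
          rng = np.random.default_rng(seed)
          n_rows, n_cols = grid
          dim = inputs.shape[1]
          weights = rng.normal(size=(n_rows, n_cols, dim))
          # Grid coordinates, used to compute neighborhood distances on the map.
          coords = np.stack(np.meshgrid(np.arange(n_rows), np.arange(n_cols),
                                        indexing="ij"), axis=-1)

          for _ in range(epochs):
              for x in inputs:
                  # Best-matching unit: the unit whose weights are closest to the input.
                  dists = np.linalg.norm(weights - x, axis=-1)
                  bmu = np.unravel_index(np.argmin(dists), dists.shape)
                  # Gaussian neighborhood around the BMU on the grid.
                  grid_dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
                  h = np.exp(-(grid_dist ** 2) / (2 * sigma ** 2))
                  # Pull units toward the input, weighted by the neighborhood.
                  weights += lr * h[..., None] * (x - weights)
          return weights

      # Usage with simulated 8-dimensional inputs (illustrative only):
      trained = train_som(np.random.default_rng(1).normal(size=(200, 8)))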

  6. Differential effect of visual masking in perceptual categorization.

    PubMed

    Hélie, Sébastien; Cousineau, Denis

    2015-06-01

    This article explores the visual information used to categorize stimuli drawn from a common stimulus space into verbal and nonverbal categories using 2 experiments. Experiment 1 explores the effect of target duration on verbal and nonverbal categorization using backward masking to interrupt visual processing. With categories equated for difficulty for long and short target durations, intermediate target duration shows an advantage for verbal categorization over nonverbal categorization. Experiment 2 tests whether the results of Experiment 1 can be explained by shorter target duration resulting in a smaller signal-to-noise ratio of the categorization stimulus. To test for this possibility, Experiment 2 used integration masking with the same stimuli, categories, and masks as Experiment 1 with a varying level of mask opacity. As predicted, low mask opacity yielded similar results to long target duration, while high mask opacity yielded similar results to short target duration. Importantly, intermediate mask opacity produced an advantage for verbal categorization over nonverbal categorization, similar to intermediate target duration. These results suggest that verbal and nonverbal categorization are affected differently by manipulations affecting the signal-to-noise ratio of the stimulus, consistent with multiple-system theories of categorization. The results further suggest that verbal categorization may be more digital (and more robust to low signal-to-noise ratio) while the information used in nonverbal categorization may be more analog (and less robust to lower signal-to-noise ratio). This article concludes with a discussion of how these new results affect the use of masking in perceptual categorization and multiple-system theories of perceptual category learning. (c) 2015 APA, all rights reserved.
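
    Integration masking with graded opacity, as used in Experiment 2 above, can be thought of as alpha-blending the categorization stimulus with the mask, so that higher mask opacity lowers the stimulus's effective signal-to-noise ratio. A minimal numerical illustration, assuming stimuli are arrays of pixel intensities:

      import numpy as np

      def integrate_mask(stimulus, mask, opacity):
          """Blend a stimulus image with a mask; `opacity` in [0, 1] is the
          mask's weight, so higher opacity means a noisier composite."""
          return (1.0 - opacity) * stimulus + opacity * mask

      stimulus = np.random.default_rng(0).random((64, 64))
      mask = np.random.default_rng(1).random((64, 64))
      low_noise = integrate_mask(stimulus, mask, opacity=0.2)
      high_noise = integrate_mask(stimulus, mask, opacity=0.8)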

  7. What makes a word so attractive? Disclosing the urge to read while bisecting.

    PubMed

    Girelli, Luisa; Previtali, Paola; Arduino, Lisa S

    2018-04-22

    Expert readers have been repeatedly reported to misperceive the centre of visual stimuli, systematically shifting the bisection of lines to the left (pseudoneglect), while showing a cross-over effect when bisecting different types of orthographic strings (Arduino et al., 2010, Neuropsychologia, 48, 2140). This difference has been attributed to the asymmetrical allocation of attention that visuo-verbal material receives when lexical access occurs (e.g., Fischer, 2004, Cognitive Brain Research, 4, 163). The aim of this study was to further examine which visual features guide recognition of potentially orthographic materials. To disentangle the role of orthography, heterogeneity, and visuo-perceptual discreteness, we presented Italian unimpaired adults with four experiments exploiting the bisection paradigm. The results showed that a cross-over effect emerges in most discrete strings, especially when their internal structure, that is, being composed of heterogeneous elements, is suggestive of orthographically relevant material. Interestingly, the cross-over effect systematically characterized the processing of letter strings (Experiment 2) and words (Experiments 3 and 4), whether visually discrete or not. Overall, this pattern of results suggests that neither discreteness nor heterogeneity per se is responsible for activating visual scanning mechanisms implied in text exploration, although both contribute to increasing the chance of a visual stimulus undergoing a perceptual analysis dedicated to pre-lexical processing. © 2018 The British Psychological Society.

  8. Early Visual Word Processing Is Flexible: Evidence from Spatiotemporal Brain Dynamics.

    PubMed

    Chen, Yuanyuan; Davis, Matthew H; Pulvermüller, Friedemann; Hauk, Olaf

    2015-09-01

    Visual word recognition is often described as automatic, but the functional locus of top-down effects is still a matter of debate. Do task demands modulate how information is retrieved, or only how it is used? We used EEG/MEG recordings to assess whether, when, and how task contexts modify early retrieval of specific psycholinguistic information in occipitotemporal cortex, an area likely to contribute to early stages of visual word processing. Using a parametric approach, we analyzed the spatiotemporal response patterns of occipitotemporal cortex for orthographic, lexical, and semantic variables in three psycholinguistic tasks: silent reading, lexical decision, and semantic decision. Task modulation of word frequency and imageability effects occurred simultaneously in ventral occipitotemporal regions-in the vicinity of the putative visual word form area-around 160 msec, following task effects on orthographic typicality around 100 msec. Frequency and typicality also produced task-independent effects in anterior temporal lobe regions after 200 msec. The early task modulation for several specific psycholinguistic variables indicates that occipitotemporal areas integrate perceptual input with prior knowledge in a task-dependent manner. Still, later task-independent effects in anterior temporal lobes suggest that word recognition eventually leads to retrieval of semantic information irrespective of task demands. We conclude that even a highly overlearned visual task like word recognition should be described as flexible rather than automatic.

  9. Handwriting generates variable visual output to facilitate symbol learning.

    PubMed

    Li, Julia X; James, Karin H

    2016-03-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing 2 hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: 3 involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and 3 involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the 6 conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  10. Handwriting generates variable visual input to facilitate symbol learning

    PubMed Central

    Li, Julia X.; James, Karin H.

    2015-01-01

    Recent research has demonstrated that handwriting practice facilitates letter categorization in young children. The present experiments investigated why handwriting practice facilitates visual categorization by comparing two hypotheses: that handwriting exerts its facilitative effect because of the visual-motor production of forms, resulting in a direct link between motor and perceptual systems, or because handwriting produces variable visual instances of a named category in the environment that then changes neural systems. We addressed these issues by measuring performance of 5-year-old children on a categorization task involving novel, Greek symbols across 6 different types of learning conditions: three involving visual-motor practice (copying typed symbols independently, tracing typed symbols, tracing handwritten symbols) and three involving visual-auditory practice (seeing and saying typed symbols of a single typed font, of variable typed fonts, and of handwritten examples). We could therefore compare visual-motor production with visual perception both of variable and similar forms. Comparisons across the six conditions (N = 72) demonstrated that all conditions that involved studying highly variable instances of a symbol facilitated symbol categorization relative to conditions where similar instances of a symbol were learned, regardless of visual-motor production. Therefore, learning perceptually variable instances of a category enhanced performance, suggesting that handwriting facilitates symbol understanding by virtue of its environmental output: supporting the notion of developmental change through brain-body-environment interactions. PMID:26726913

  11. Evidence for a basal temporal visual language center: cortical stimulation producing pure alexia.

    PubMed

    Mani, J; Diehl, B; Piao, Z; Schuele, S S; Lapresto, E; Liu, P; Nair, D R; Dinner, D S; Lüders, H O

    2008-11-11

    Dejerine and Benson and Geschwind postulated disconnection of the dominant angular gyrus from both visual association cortices as the basis for pure alexia, emphasizing disruption of white matter tracts in the dominant temporooccipital region. Recently, functional imaging studies have provided evidence for direct participation of basal temporal and occipital cortices in the cognitive process of reading. The exact location and function of these areas remain a matter of debate. The aim of this study was to confirm the participation of the basal temporal region in reading. Extraoperative electrical stimulation of the dominant hemisphere was performed in three subjects using subdural electrodes, as part of presurgical evaluation for refractory epilepsy. Pure alexia was reproduced during cortical stimulation of the dominant posterior fusiform and inferior temporal gyri in all three patients. Stimulation resulted in selective reading difficulty with intact auditory comprehension and writing. Reading difficulty involved sentences and words with intact letter-by-letter reading. Picture naming difficulties were also noted at some electrodes. This region is located posterior to and contiguous with the basal temporal language area (BTLA) where stimulation resulted in global language dysfunction in visual and auditory realms. The location corresponded with the visual word form area described on functional MRI. These observations support the existence of a visual language area in the dominant fusiform and occipitotemporal gyri, contiguous with basal temporal language area. A portion of visual language area was exclusively involved in lexical processing while the other part of this region processed both lexical and nonlexical symbols.

  12. The neural correlates of morphological complexity processing: Detecting structure in pseudowords.

    PubMed

    Schuster, Swetlana; Scharinger, Mathias; Brooks, Colin; Lahiri, Aditi; Hartwigsen, Gesa

    2018-06-01

    Morphological complexity is a highly debated issue in visual word recognition. Previous neuroimaging studies have shown that speakers are sensitive to degrees of morphological complexity. Two-step derived complex words (bridging, through bridge[N] > bridge[V] > bridging) led to more enhanced activation in the left inferior frontal gyrus than their 1-step derived counterparts (running, through run[V] > running). However, it remains unclear whether sensitivity to degrees of morphological complexity extends to pseudowords. If this were the case, it would indicate that abstract knowledge of morphological structure is independent of lexicality. We addressed this question by investigating the processing of two sets of pseudowords in German. Both sets contained morphologically viable two-step derived pseudowords differing in the number of derivational steps required to access an existing lexical representation and therefore the degree of structural analysis expected during processing. Using a 2 × 2 factorial design, we found lexicality effects to be distinct from processing signatures relating to structural analysis in pseudowords. Semantically-driven processes such as lexical search showed a more frontal distribution while combinatorial processes related to structural analysis engaged more parietal parts of the network. Specifically, more complex pseudowords showed increased activation in parietal regions (right superior parietal lobe and left precuneus) relative to pseudowords that required less structural analysis to arrive at an existing lexical representation. As the two sets were matched on cohort size and surface form, these results highlight the role of internal levels of morphological structure even in forms that do not possess a lexical representation. © 2018 Wiley Periodicals, Inc.

  13. Evidence for a differential interference of noise in sub-lexical and lexical reading routes in healthy participants and dyslexics.

    PubMed

    Pina Rodrigues, Ana; Rebola, José; Jorge, Helena; Ribeiro, Maria José; Pereira, Marcelino; Castelo-Branco, Miguel; van Asselen, Marieke

    The ineffective exclusion of surrounding noise has been proposed to underlie the reading deficits in developmental dyslexia. However, previous studies supporting this hypothesis focused on low-level visual tasks, providing only an indirect link between noise interference and reading processes. In this study, we investigated the effect of noise on regular, irregular, and pseudoword reading in 23 dyslexic children and 26 age- and IQ-matched controls, by applying the white noise displays typically used to validate this theory to a lexical decision task. Reading performance and eye movements were measured. Results showed that white noise did not consistently affect dyslexic readers more than typical readers. Noise affected dyslexic readers more than typical readers in terms of reading accuracy, but affected typical readers more than dyslexic readers in terms of response time and eye movements (number of fixations and regressions). Furthermore, in typical readers, noise slowed the reading of pseudowords more than that of real words. These results suggest a particular impact of noise on the sub-lexical reading route, where attention has to be deployed to individual letters. The use of a lexical route would reduce the effect of noise. A differential impact of noise between words and pseudowords may therefore not be evident in dyslexic children if they are not yet proficient in using the lexical route. These findings indicate that the type of reading stimuli and consequent reading strategies play an important role in determining the effects of noise interference in reading processing and should be taken into account by further studies.

  14. A normative database and determinants of lexical retrieval for 186 Arabic nouns: effects of psycholinguistic and morpho-syntactic variables on naming latency.

    PubMed

    Khwaileh, Tariq; Body, Richard; Herbert, Ruth

    2014-12-01

    Research into lexical retrieval requires pictorial stimuli standardised for key psycholinguistic variables. Such databases exist in a number of languages but not in Arabic. In addition, there are few studies of the effects of psycholinguistic and morpho-syntactic variables on Arabic lexical retrieval. The current study identified a set of culturally and linguistically appropriate concept labels, and corresponding photographic representations for Levantine Arabic. The set included masculine and feminine nouns, nouns from both types of plural formation (sound and broken), and both rational and irrational nouns. Levantine Arabic speakers provided norms for visual complexity, imageability, age of acquisition, naming latency and name agreement. This delivered a normative database for a set of 186 Arabic nouns. The effects of the morpho-syntactic and the psycholinguistic variables on lexical retrieval were explored using the database. Imageability and age of acquisition were the only significant determinants of successful lexical retrieval in Arabic. None of the other variables, including all the linguistic variables, had any effect on production time. The normative database is available for the use of clinicians and researchers in the Arab world in the domains of speech and language pathology, neurolinguistics and psycholinguistics. The database and the photographic representations will soon be available for free download from the first author's personal webpage or via email.

  15. Beyond the VWFA: The orthography-semantics interface in spelling and reading

    PubMed Central

    Purcell, Jeremy J.; Shea, Jennifer; Rapp, Brenda

    2014-01-01

    Lexical orthographic information provides the basis for recovering the meanings of words in reading and for generating correct word spellings in writing. Research has provided evidence that an area of the left ventral temporal cortex, a sub-region of what is often referred to as the Visual Word Form Area (VWFA), plays a significant role specifically in lexical orthographic processing. The current investigation goes beyond this previous work by examining the neurotopography of the interface of lexical orthography with semantics. We apply a novel lesion mapping approach with three individuals with acquired dysgraphia and dyslexia who suffered lesions to left ventral temporal cortex. To map cognitive processes to their neural substrates, this lesion mapping approach applies similar logical constraints as used in cognitive neuropsychological research. Using this approach, this investigation: (1) Identifies a region anterior to the VWFA that is important in the interface of orthographic information with semantics for reading and spelling; (2) Determines that, within this Orthography-Semantics Interface Region (OSIR), access to orthography from semantics (spelling) is topographically distinct from access to semantics from orthography (reading); (3) Provides evidence that, within this region, there is modality-specific access to and from lexical semantics for both spoken and written modalities, in both word production and comprehension. Overall, this study contributes to our understanding of the neural architecture at the lexical orthography-semantic-phonological interface within left ventral temporal cortex. PMID:24833190

  16. The Role of Auditory and Visual Speech in Word Learning at 18 Months and in Adulthood

    ERIC Educational Resources Information Center

    Havy, Mélanie; Foroud, Afra; Fais, Laurel; Werker, Janet F.

    2017-01-01

    Visual information influences speech perception in both infants and adults. It is still unknown whether lexical representations are multisensory. To address this question, we exposed 18-month-old infants (n = 32) and adults (n = 32) to new word-object pairings: Participants either heard the acoustic form of the words or saw the talking face in…

  17. Real-time processing of ASL signs: Delayed first language acquisition affects organization of the mental lexicon

    PubMed Central

    Lieberman, Amy M.; Borovsky, Arielle; Hatrak, Marla; Mayberry, Rachel I.

    2014-01-01

    Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age of onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals are highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf individuals who were late learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sub-lexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. PMID:25528091

  18. Anatomical connections of the visual word form area.

    PubMed

    Bouhali, Florence; Thiebaut de Schotten, Michel; Pinel, Philippe; Poupon, Cyril; Mangin, Jean-François; Dehaene, Stanislas; Cohen, Laurent

    2014-11-12

    The visual word form area (VWFA), a region systematically involved in the identification of written words, occupies a reproducible location in the left occipitotemporal sulcus in expert readers of all cultures. Such a reproducible localization is paradoxical, given that reading is a recent invention that could not have influenced the genetic evolution of the cortex. Here, we test the hypothesis that the VWFA recycles a region of the ventral visual cortex that shows a high degree of anatomical connectivity to perisylvian language areas, thus providing an efficient circuit for both grapheme-phoneme conversion and lexical access. In two distinct experiments, using high-resolution diffusion-weighted data from 75 human subjects, we show that (1) the VWFA, compared with the fusiform face area, shows higher connectivity to left-hemispheric perisylvian superior temporal, anterior temporal and inferior frontal areas; (2) on a posterior-to-anterior axis, its localization within the left occipitotemporal sulcus maps onto a peak of connectivity with language areas, with slightly distinct subregions showing preferential projections to areas respectively involved in grapheme-phoneme conversion and lexical access. In agreement with functional data on the VWFA in blind subjects, the results suggest that connectivity to language areas, over and above visual factors, may be the primary determinant of VWFA localization. Copyright © 2014 the authors 0270-6474/14/3415402-13$15.00/0.

  19. An ERP Investigation of Visual Word Recognition in Syllabary Scripts

    PubMed Central

    Okano, Kana; Grainger, Jonathan; Holcomb, Phillip J.

    2013-01-01

    The bi-modal interactive-activation model has been successfully applied to understanding the neuro-cognitive processes involved in reading words in alphabetic scripts, as reflected in the modulation of ERP components in masked repetition priming. In order to test the generalizability of this approach, the current study examined word recognition in a different writing system, the Japanese syllabary scripts Hiragana and Katakana. Native Japanese participants were presented with repeated or unrelated pairs of Japanese words where the prime and target words were both in the same script (within-script priming, Experiment 1) or were in the opposite script (cross-script priming, Experiment 2). As in previous studies with alphabetic scripts, in both experiments the N250 (sub-lexical processing) and N400 (lexical-semantic processing) components were modulated by priming, although the time-course was somewhat delayed. The earlier N/P150 effect (visual feature processing) was present only in Experiment 1 where prime and target words shared visual features. Overall, the results provide support for the hypothesis that visual word recognition involves a generalizable set of neuro-cognitive processes that operate in a similar manner across different writing systems and languages, as well as pointing to the viability of the bi-modal interactive activation framework for modeling such processes. PMID:23378278

  20. Observer Bias: An Interaction of Temperament Traits with Biases in the Semantic Perception of Lexical Material

    PubMed Central

    Trofimova, Ira

    2014-01-01

    The lexical approach is a method in differential psychology that uses people's estimations of verbal descriptors of human behavior in order to derive the structure of human individuality. The validity of the assumptions of this method about the objectivity of people's estimations is rarely questioned. Meanwhile the social nature of language and the presence of emotionality biases in cognition are well-recognized in psychology. A question remains, however, as to whether such an emotionality-capacities bias is strong enough to affect semantic perception of verbal material. For the lexical approach to be valid as a method of scientific investigations, such biases should not exist in semantic perception of the verbal material that is used by this approach. This article reports on two studies investigating differences between groups contrasted by 12 temperament traits (i.e. by energetic and other capacities, as well as emotionality) in the semantic perception of very general verbal material. Both studies contrasted the groups by a variety of capacities: endurance, lability and emotionality separately in physical, social-verbal and mental aspects of activities. Hypotheses of “background emotionality” and a “projection through capacities” were supported. Non-evaluative criteria for categorization (related to complexity, organization, stability and probability of occurrence of objects) followed the polarity of evaluative criteria, and did not show independence from this polarity. Participants with stronger physical or social endurance gave significantly more positive ratings to a variety of concepts, and participants with faster physical tempo gave more positive ratings to timing-related concepts. The results suggest that people's estimations of lexical material related to human behavior have emotionality, language- and dynamical capacities-related biases and therefore are unreliable. This questions the validity of the lexical approach as a method for the objective study of stable individual differences. PMID:24475048

  1. Color names, color categories, and color-cued visual search: Sometimes, color perception is not categorical

    PubMed Central

    Brown, Angela M; Lindsey, Delwin T; Guckes, Kevin M

    2011-01-01

    The relation between colors and their names is a classic case-study for investigating the Sapir-Whorf hypothesis that categorical perception is imposed on perception by language. Here, we investigate the Sapir-Whorf prediction that visual search for a green target presented among blue distractors (or vice versa) should be faster than search for a green target presented among distractors of a different color of green (or for a blue target among different blue distractors). Gilbert, Regier, Kay & Ivry (2006) reported that this Sapir-Whorf effect is restricted to the right visual field (RVF), because the major brain language centers are in the left cerebral hemisphere. We found no categorical effect at the Green|Blue color boundary, and no categorical effect restricted to the RVF. Scaling of perceived color differences by Maximum Likelihood Difference Scaling (MLDS) also showed no categorical effect, including no effect specific to the RVF. Two models fit the data: a color difference model based on MLDS and a standard opponent-colors model of color discrimination based on the spectral sensitivities of the cones. Neither of these models, nor any of our data, suggested categorical perception of colors at the Green|Blue boundary, in either visual field. PMID:21980188

  2. Prediction and constraint in audiovisual speech perception

    PubMed Central

    Peelle, Jonathan E.; Sommers, Mitchell S.

    2015-01-01

    During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to auditory information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. PMID:25890390

  3. Metrical expectations from preceding prosody influence perception of lexical stress

    PubMed Central

    Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C.; Tanenhaus, Michael K.

    2015-01-01

    Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners’ consideration of competing alternatives with different stress patterns (e.g., ‘jury/gi’raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress, and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate on-line probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition. PMID:25621583

  4. Fronto-striatal contribution to lexical set-shifting.

    PubMed

    Simard, France; Joanette, Yves; Petrides, Michael; Jubault, Thomas; Madjar, Cécile; Monchi, Oury

    2011-05-01

    Fronto-striatal circuits in set-shifting have been examined in neuroimaging studies using the Wisconsin Card Sorting Task (WCST) that requires changing the classification rule for cards containing visual stimuli that differ in color, shape, and number. The present study examined whether this fronto-striatal contribution to the planning and execution of set-shifts is similar in a modified sorting task in which lexical rules are applied to word stimuli. Young healthy adults were scanned with functional magnetic resonance imaging while performing the newly developed lexical version of the WCST: the Wisconsin Word Sorting Task. Significant activation was found in a cortico-striatal loop that includes area 47/12 of the ventrolateral prefrontal cortex (PFC), and the caudate nucleus during the planning of a set-shift, and in another that includes the posterior PFC and the putamen during the execution of a set-shift. However, in the present lexical task, additional activation peaks were observed in area 45 of the ventrolateral PFC area during both matching periods. These results provide evidence that the functional contributions of the various fronto-striatal loops are not dependent on the modality of the information to be manipulated but rather on the specific executive processes required.

  5. Emotion words and categories: evidence from lexical decision.

    PubMed

    Scott, Graham G; O'Donnell, Patrick J; Sereno, Sara C

    2014-05-01

    We examined the categorical nature of emotion word recognition. Positive, negative, and neutral words were presented in lexical decision tasks. Word frequency was additionally manipulated. In Experiment 1, "positive" and "negative" categories of words were implicitly indicated by the blocked design employed. A significant emotion-frequency interaction was obtained, replicating past research. While positive words consistently elicited faster responses than neutral words, only low frequency negative words demonstrated a similar advantage. In Experiments 2a and 2b, explicit categories ("positive," "negative," and "household" items) were specified to participants. Positive words again elicited faster responses than did neutral words. Responses to negative words, however, were no different than those to neutral words, regardless of their frequency. The overall pattern of effects indicates that positive words are always facilitated, frequency plays a greater role in the recognition of negative words, and a "negative" category represents a somewhat disparate set of emotions. These results support the notion that emotion word processing may be moderated by distinct systems.

  6. Reading without words or target detection? A re-analysis and replication fMRI study of the Landolt paradigm.

    PubMed

    Heim, Stefan; von Tongeln, Franziska; Hillen, Rebekka; Horbach, Josefine; Radach, Ralph; Günther, Thomas

    2018-06-19

    The Landolt paradigm is a visual scanning task intended to evoke reading-like eye movements in the absence of orthographic or lexical information, thus allowing the dissociation of (sub-) lexical vs. visual processing. To that end, all letters in real word sentences are exchanged for closed Landolt rings, with 0, 1, or 2 open Landolt rings as targets in each Landolt sentence. A preliminary fMRI block-design study (Hillen et al. in Front Hum Neurosci 7:1-14, 2013) demonstrated that the Landolt paradigm has a special neural signature, recruiting the right IPS and SPL as part of the endogenous attention network. However, in that analysis, the brain responses to target detection could not be separated from those involved in processing Landolt stimuli without targets. The present study reports two fMRI experiments testing whether the targets or the Landolt stimuli per se led to the right IPS/SPL activation. Experiment 1 was an event-related re-analysis of the Hillen et al. (Front Hum Neurosci 7:1-14, 2013) data. Experiment 2 was a replication study with a new sample and identical procedures. In both experiments, the right IPS/SPL were recruited in the Landolt condition as compared to orthographic stimuli even in the absence of any target in the stimulus, indicating that the properties of the Landolt task itself trigger this right parietal activation. These findings are discussed against the background of behavioural and neuroimaging studies of healthy reading as well as developmental and acquired dyslexia. Consequently, this neuroimaging evidence might encourage the use of the Landolt paradigm also in the context of examining reading disorders, as it taps into the orientation of visual attention during reading-like scanning of stimuli without interfering sub-lexical information.

  7. Categorical Perception of Colour in the Left and Right Visual Field Is Verbally Mediated: Evidence from Korean

    ERIC Educational Resources Information Center

    Roberson, Debi; Pak, Hyensou; Hanley, J. Richard

    2008-01-01

    In this study we demonstrate that Korean (but not English) speakers show Categorical perception (CP) on a visual search task for a boundary between two Korean colour categories that is not marked in English. These effects were observed regardless of whether target items were presented to the left or right visual field. Because this boundary is…

  8. PATTERNS OF CLINICALLY SIGNIFICANT COGNITIVE IMPAIRMENT IN HOARDING DISORDER.

    PubMed

    Mackin, R Scott; Vigil, Ofilio; Insel, Philip; Kivowitz, Alana; Kupferman, Eve; Hough, Christina M; Fekri, Shiva; Crothers, Ross; Bickford, David; Delucchi, Kevin L; Mathews, Carol A

    2016-03-01

    The cognitive characteristics of individuals with hoarding disorder (HD) are not well understood. Existing studies are relatively few and somewhat inconsistent but suggest that individuals with HD may have specific dysfunction in the cognitive domains of categorization, speed of information processing, and decision making. However, there have been no studies evaluating the degree to which cognitive dysfunction in these domains reflects clinically significant cognitive impairment (CI). Participants included 78 individuals who met DSM-V criteria for HD and 70 age- and education-matched controls. Cognitive performance on measures of memory, attention, information processing speed, abstract reasoning, visuospatial processing, decision making, and categorization ability was evaluated for each participant. Rates of clinical impairment for each measure were compared, as were age- and education-corrected raw scores for each cognitive test. HD participants showed greater incidence of CI on measures of visual memory, visual detection, and visual categorization relative to controls. Raw-score comparisons between groups showed similar results with HD participants showing lower raw-score performance on each of these measures. In addition, in raw-score comparisons HD participants also demonstrated relative strengths compared to control participants on measures of verbal and visual abstract reasoning. These results suggest that HD is associated with a pattern of clinically significant CI in some visually mediated neurocognitive processes including visual memory, visual detection, and visual categorization. Additionally, these results suggest HD individuals may also exhibit relative strengths, perhaps compensatory, in abstract reasoning in both verbal and visual domains. © 2015 Wiley Periodicals, Inc.

  9. Competition between conceptual relations affects compound recognition: the role of entropy.

    PubMed

    Schmidtke, Daniel; Kuperman, Victor; Gagné, Christina L; Spalding, Thomas L

    2016-04-01

    Previous research has suggested that the conceptual representation of a compound is based on a relational structure linking the compound's constituents. Existing accounts of the visual recognition of modifier-head or noun-noun compounds posit that the process involves the selection of a relational structure out of a set of competing relational structures associated with the same compound. In this article, we employ the information-theoretic metric of entropy to gauge relational competition and investigate its effect on the visual identification of established English compounds. The data from two lexical decision megastudies indicates that greater entropy (i.e., increased competition) in a set of conceptual relations associated with a compound is associated with longer lexical decision latencies. This finding indicates that there exists competition between potential meanings associated with the same complex word form. We provide empirical support for conceptual composition during compound word processing in a model that incorporates the effect of the integration of co-activated and competing relational information.
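
    The competition measure referred to here is Shannon entropy over the distribution of conceptual relations attested for a compound, H = -sum over r of p(r) log2 p(r), which is maximal when the candidate relations are equally likely. A small worked example with made-up relation labels and counts:

      import math

      def relation_entropy(relation_counts):
          """Shannon entropy (bits) of a compound's conceptual-relation
          distribution, estimated from annotation counts."""
          total = sum(relation_counts.values())
          probs = [c / total for c in relation_counts.values() if c > 0]
          return -sum(p * math.log2(p) for p in probs)

      # Hypothetical counts: one compound dominated by a single relation
      # (low entropy, little competition) versus one split across relations
      # (higher entropy, more competition).
      print(relation_entropy({"MADE_OF": 18, "FOR": 1, "LOCATED": 1}))   # ~0.57 bits
      print(relation_entropy({"MADE_OF": 7, "FOR": 7, "LOCATED": 6}))    # ~1.58 bits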

  10. Does letter rotation slow down orthographic processing in word recognition?

    PubMed

    Perea, Manuel; Marcet, Ana; Fernández-López, María

    2018-02-01

    Leading neural models of visual word recognition assume that letter rotation slows down the conversion of the visual input to a stable orthographic representation (e.g., the local combination detector model; Dehaene, Cohen, Sigman, & Vinckier, 2005, Trends in Cognitive Sciences, 9, 335-341). If this premise is true, briefly presented rotated primes should be less effective at activating word representations than those primes with upright letters. To test this prediction, we conducted a masked priming lexical decision experiment with vertically presented words either rotated 90° or in marquee format (i.e., vertically but with upright letters). We examined the impact of the format on both letter identity (masked identity priming: identity vs. unrelated) and letter position (masked transposed-letter priming: transposed-letter prime vs. replacement-letter prime). Results revealed sizeable masked identity and transposed-letter priming effects that were similar in magnitude for rotated and marquee words. Therefore, the reading cost from letter rotation does not arise in the initial access to orthographic/lexical representations.

  11. Manipulation of length and lexicality localizes the functional neuroanatomy of phonological processing in adult readers.

    PubMed

    Church, Jessica A; Balota, David A; Petersen, Steven E; Schlaggar, Bradley L

    2011-06-01

    In a previous study of single word reading, regions in the left supramarginal gyrus and left angular gyrus showed positive BOLD activity in children but significantly less activity in adults for high-frequency words [Church, J. A., Coalson, R. S., Lugar, H. M., Petersen, S. E., & Schlaggar, B. L. A developmental fMRI study of reading and repetition reveals changes in phonological and visual mechanisms over age. Cerebral Cortex, 18, 2054-2065, 2008]. This developmental decrease may reflect decreased reliance on phonological processing for familiar stimuli in adults. Therefore, in the present study, variables thought to influence phonological demand (string length and lexicality) were manipulated. Length and lexicality effects in the brain were explored using both ROI and whole-brain approaches. In the ROI analysis, the supramarginal and angular regions from the previous study were applied to this study. The supramarginal region showed a significant positive effect of length, consistent with a role in phonological processing, whereas the angular region showed only negative deflections from baseline with a strong effect of lexicality and other weaker effects. At the whole-brain level, varying effects of length and lexicality and their interactions were observed in 85 regions throughout the brain. The application of hierarchical clustering analysis to the BOLD time course data derived from these regions revealed seven clusters, with potentially revealing anatomical locations. Of note, a left angular gyrus region was the sole constituent of one cluster. Taken together, these findings in adult readers (1) provide support for a widespread set of brain regions affected by lexical variables, (2) corroborate a role for phonological processing in the left supramarginal gyrus, and (3) do not support a strong role for phonological processing in the left angular gyrus.
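
    The final clustering step mentioned above can be reproduced generically with SciPy: stack each region's mean BOLD time course into a matrix, build a hierarchical linkage tree, and cut it into a chosen number of clusters. The array shape, Ward linkage, and seven-cluster cut below are illustrative assumptions, not the study's exact settings.

      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster

      def cluster_timecourses(timecourses, n_clusters=7):
          """Cluster regions by the similarity of their BOLD time courses.

          `timecourses` is an (n_regions, n_timepoints) array; returns an
          integer cluster label per region."""
          # Ward linkage on the region-by-timepoint matrix (Euclidean distance).
          tree = linkage(timecourses, method="ward")
          return fcluster(tree, t=n_clusters, criterion="maxclust")

      # Toy data: 85 regions x 20 timepoints of simulated signal.
      rng = np.random.default_rng(0)
      labels = cluster_timecourses(rng.normal(size=(85, 20)), n_clusters=7)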

  12. How the blind "see" Braille: lessons from functional magnetic resonance imaging.

    PubMed

    Sadato, Norihiro

    2005-12-01

    What does the visual cortex of the blind do during Braille reading? This process involves converting simple tactile information into meaningful patterns that have lexical and semantic properties. The perceptual processing of Braille might be mediated by the somatosensory system, whereas visual letter identity is accomplished within the visual system in sighted people. Recent advances in functional neuroimaging techniques, such as functional magnetic resonance imaging, have enabled exploration of the neural substrates of Braille reading. The primary visual cortex of early-onset blind subjects is functionally relevant to Braille reading, suggesting that the brain shows remarkable plasticity that potentially permits the additional processing of tactile information in the visual cortical areas.

  13. Emotionally positive stimuli facilitate lexical decisions-an ERP study.

    PubMed

    Kissler, Johanna; Koessler, Susanne

    2011-03-01

    The influence of briefly presented positive and negative emotional pictures on lexical decisions on positive, negative and neutral words or pseudowords was investigated. Behavioural reactions were fastest following all positive stimuli and most accurate for positive words. Stimulus-locked ERPs revealed enhanced early posterior and late parietal attention effects following positive pictures. A small neural affective priming effect was reflected by P3 modulation, indicating more attention allocation to affectively incongruent prime-target pairs. N400 was insensitive to emotion. Response-locked ERPs revealed an early fronto-central negativity from 480 ms before reactions to positive words. It was generated in both fronto-central and extra-striate visual areas, demonstrating a contribution of perceptual and, notably, motor preparation processes. Thus, no behavioural and little neural evidence for congruency-driven affective priming with emotional pictures was found, but positive stimuli generally facilitated lexical decisions, not only enhancing perception, but also acting rapidly on response preparation and bypassing full semantic analysis. Copyright © 2010 Elsevier B.V. All rights reserved.

  14. Lexical precision in skilled readers: Individual differences in masked neighbor priming.

    PubMed

    Andrews, Sally; Hersch, Jolyn

    2010-05-01

    Two experiments investigated the relationship between masked form priming and individual differences in reading and spelling proficiency among university students. Experiment 1 assessed neighbor priming for 4-letter word targets from high- and low-density neighborhoods in 97 university students. The overall results replicated previous evidence of facilitatory neighborhood priming only for low-neighborhood words. However, analyses including measures of reading and spelling proficiency as covariates revealed that better spellers showed inhibitory priming for high-neighborhood words, while poorer spellers showed facilitatory priming. Experiment 2, with 123 participants, replicated the finding of stronger inhibitory neighbor priming in better spellers using 5-letter words and distinguished facilitatory and inhibitory components of priming by comparing neighbor primes with ambiguous and unambiguous partial-word primes (e.g., crow#, cr#wd, crown CROWD). The results indicate that spelling ability is selectively associated with inhibitory effects of lexical competition. The implications for theories of visual word recognition and the lexical quality hypothesis of reading skill are discussed.

  15. Color Categories: Evidence for the Cultural Relativity Hypothesis

    ERIC Educational Resources Information Center

    Roberson, D.; Davidoff, J.; Davies, I.R.L.; Shapiro, L.R.

    2005-01-01

    The question of whether language affects our categorization of perceptual continua is of particular interest for the domain of color where constraints on categorization have been proposed both within the visual system and in the visual environment. Recent research (Roberson, Davies, & Davidoff, 2000; Roberson et al., in press) found…

  16. The picture superiority effect in categorization: visual or semantic?

    PubMed

    Job, R; Rumiati, R; Lotto, L

    1992-09-01

    Two experiments are reported whose aim was to replicate and generalize the results presented by Snodgrass and McCullough (1986) on the effect of visual similarity in the categorization process. For pictures, Snodgrass and McCullough's results were replicated because Ss took longer to discriminate elements from 2 categories when they were visually similar than when they were visually dissimilar. However, unlike Snodgrass and McCullough, an analogous increase was also observed for word stimuli. The pattern of results obtained here can be explained most parsimoniously with reference to the effect of semantic similarity, or semantic and visual relatedness, rather than to visual similarity alone.

  17. Prediction and constraint in audiovisual speech perception.

    PubMed

    Peelle, Jonathan E; Sommers, Mitchell S

    2015-07-01

    During face-to-face conversational speech listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners through increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Out of sight, out of mind: Categorization learning and normal aging.

    PubMed

    Schenk, Sabrina; Minda, John P; Lech, Robert K; Suchan, Boris

    2016-10-01

    The present combined EEG and eye-tracking study examined the process of categorization learning at different ages and aimed to investigate to what degree categorization learning is mediated by visual attention and perceptual strategies. Seventeen young subjects and ten elderly subjects had to perform a visual categorization task with two abstract categories. Each category consisted of prototypical stimuli and an exception. The categorization of prototypical stimuli was learned very early during the experiment, while the learning of exceptions was delayed. The categorization of exceptions was accompanied by higher P150, P250 and P300 amplitudes. In contrast to younger subjects, elderly subjects had problems with the categorization of exceptions, but showed intact categorization performance for prototypical stimuli. Moreover, elderly subjects showed higher fixation rates for important stimulus features and higher P150 amplitudes, which were positively correlated with categorization performance. These results indicate that elderly subjects compensate for cognitive decline through enhanced perceptual and attentional processing of individual stimulus features. Additionally, a computational modeling approach was applied and showed a transition away from purely abstraction-based learning to exemplar-based learning in the middle block for both groups. However, the calculated models provided a better fit for younger subjects than for elderly subjects. The current study demonstrates that human categorization learning is based on early abstraction-based processing followed by an exemplar-memorization stage. This strategy combination facilitates the learning of real-world categories with a nuanced category structure. In addition, the present study suggests that categorization learning is affected by normal aging and modulated by perceptual processing and visual attention. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The Pleasantness of Visual Symmetry: Always, Never or Sometimes

    PubMed Central

    Pecchinenda, Anna; Bertamini, Marco; Makin, Alexis David James; Ruta, Nicole

    2014-01-01

    There is evidence of a preference for visual symmetry, ranging from mate selection in the animal world to the aesthetic appreciation of works of art. It has been proposed that this preference is due to processing fluency, which engenders positive affect. But is visual symmetry pleasant? The evidence is mixed: explicit preference judgments suggest that it is, whereas implicit measures show that visual symmetry does not spontaneously engender positive affect; rather, the effect depends on participants intentionally assessing visual regularities. In four experiments using variants of the affective priming paradigm, we investigated when visual symmetry engenders positive affect. Findings showed that, when no Stroop-like effects or post-lexical mechanisms enter into play, visual symmetry spontaneously elicits positive affect and results in affective congruence effects. PMID:24658112

  20. Effects of Visual Complexity and Sublexical Information in the Occipitotemporal Cortex in the Reading of Chinese Phonograms: A Single-Trial Analysis with MEG

    ERIC Educational Resources Information Center

    Hsu, Chun-Hsien; Lee, Chia-Ying; Marantz, Alec

    2011-01-01

    We employ a linear mixed-effects model to estimate the effects of visual form and the linguistic properties of Chinese characters on M100 and M170 MEG responses from single-trial data of Chinese and English speakers in a Chinese lexical decision task. Cortically constrained minimum-norm estimation is used to compute the activation of M100 and M170…
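
    The single-trial mixed-effects approach named in this record can be sketched briefly. The sketch below only illustrates fitting a linear mixed-effects model to per-trial response amplitudes: the predictor names, the simulated data, and the random-intercept-per-subject structure are assumptions, not the authors' model or variables (the study itself used cortically constrained MEG source estimates).

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 400
    # Simulated long-format table: one row per trial with hypothetical predictors.
    trials = pd.DataFrame({
        "subject": rng.integers(0, 20, size=n).astype(str),
        "n_strokes": rng.integers(3, 20, size=n),       # visual-form predictor
        "log_frequency": rng.normal(2.0, 1.0, size=n),  # linguistic predictor
    })
    trials["m170_amplitude"] = (
        5.0 - 0.8 * trials["log_frequency"] + 0.1 * trials["n_strokes"]
        + rng.normal(0.0, 1.0, size=n)
    )

    # Fixed effects for the stimulus properties, random intercepts for subjects.
    model = smf.mixedlm("m170_amplitude ~ n_strokes + log_frequency",
                        data=trials, groups=trials["subject"])
    print(model.fit().summary())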

  1. Fine-grained visual marine vessel classification for coastal surveillance and defense applications

    NASA Astrophysics Data System (ADS)

    Solmaz, Berkan; Gundogdu, Erhan; Karaman, Kaan; Yücesoy, Veysel; Koç, Aykut

    2017-10-01

    The need for automated visual content analysis capabilities has increased substantially due to the large number of images captured by surveillance cameras. With a focus on developing practical methods for extracting effective visual data representations, deep neural network based representations have received great attention due to their success in the visual categorization of generic images. For fine-grained image categorization, a closely related yet more challenging problem than generic image categorization because of high visual similarity within subgroups, diverse applications have been developed, such as classifying images of vehicles, birds, food and plants. Here, we propose the use of deep neural network based representations for categorizing and identifying marine vessels for defense and security applications. First, we gather a large number of marine vessel images via online sources, grouping them into four coarse categories: naval, civil, commercial and service vessels. Next, we subgroup naval vessels into fine categories such as corvettes, frigates and submarines. For distinguishing images, we extract state-of-the-art deep visual representations and train support vector machines. Furthermore, we fine-tune the deep representations for marine vessel images. Experiments address two scenarios: classification and verification of naval marine vessels. The classification experiment covers coarse categorization as well as learning models of the fine categories. The verification experiment involves identifying specific naval vessels by determining whether a pair of images belongs to the identical marine vessel with the help of the learned deep representations. Given the promising performance obtained, we believe these capabilities would be essential components of future coastal and on-board surveillance systems.
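
    A minimal sketch of the general "pretrained deep features plus support vector machine" recipe outlined above is given below. It is not the authors' pipeline: the ResNet-50 backbone, the preprocessing, the LinearSVC classifier, and the vessels/train and vessels/test folder layout are all illustrative assumptions.

    import numpy as np
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from torch.utils.data import DataLoader
    from torchvision.datasets import ImageFolder
    from sklearn.svm import LinearSVC

    # Frozen ImageNet-pretrained backbone used as a fixed feature extractor.
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    backbone.fc = torch.nn.Identity()   # drop the classification head
    backbone.eval()

    preprocess = T.Compose([
        T.Resize(256), T.CenterCrop(224), T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def extract_features(image_dir):
        """Return (features, labels) for a folder-per-category image directory."""
        dataset = ImageFolder(image_dir, transform=preprocess)
        loader = DataLoader(dataset, batch_size=32, shuffle=False)
        feats, labels = [], []
        with torch.no_grad():
            for x, y in loader:
                feats.append(backbone(x).numpy())
                labels.append(y.numpy())
        return np.concatenate(feats), np.concatenate(labels)

    # Hypothetical layout: vessels/train/<coarse category>/*.jpg (likewise for test).
    X_train, y_train = extract_features("vessels/train")
    X_test, y_test = extract_features("vessels/test")

    classifier = LinearSVC(C=1.0).fit(X_train, y_train)
    print("coarse-category accuracy:", classifier.score(X_test, y_test))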

  2. Development of A Two-Stage Procedure for the Automatic Recognition of Dysfluencies in the Speech of Children Who Stutter: I. Psychometric Procedures Appropriate for Selection of Training Material for Lexical Dysfluency Classifiers

    PubMed Central

    Howell, Peter; Sackin, Stevie; Glenn, Kazan

    2007-01-01

    This program of work is intended to develop automatic recognition procedures to locate and assess stuttered dysfluencies. This article and the following one together develop and test recognizers for repetitions and prolongations. The automatic recognizers classify the speech in two stages: in the first, the speech is segmented, and in the second, the segments are categorized. The units that are segmented are words. Here, assessments by human judges of the speech of 12 children who stutter are described using a corresponding procedure. The accuracy of word boundary placement across judges, the categorization of words as fluent, repetition or prolongation, and the duration of the different fluency categories are reported. These measures allow reliable instances of repetitions and prolongations to be selected for training and assessing the recognizers in the subsequent paper. PMID:9328878

  3. Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chennubhotla, Chakra; Castro, Jason

    2013-01-01

    In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) - a dimensionality reduction technique - to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogenously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
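
    The factorization step described in this record can be illustrated with a self-contained sketch. The ratings matrix below is randomly generated and the choice of ten components is arbitrary; this only illustrates running NMF on an odors-by-descriptors matrix and inspecting how concentrated each odor's loadings are, and is not the authors' data or analysis.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    # Hypothetical panel: 150 odors rated on 140 verbal descriptors (non-negative values).
    ratings = rng.gamma(shape=1.0, scale=1.0, size=(150, 140))

    model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(ratings)   # odors x dimensions: each odor's loadings
    H = model.components_              # dimensions x descriptors: what each dimension "means"

    # A categorical, clustered occupation of odor space would show up as most odors
    # loading predominantly on a single dimension; one crude check is max-loading dominance.
    dominance = W.max(axis=1) / (W.sum(axis=1) + 1e-12)
    print("mean dominance of the strongest dimension per odor:", round(float(dominance.mean()), 3))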

  4. Metrical expectations from preceding prosody influence perception of lexical stress.

    PubMed

    Brown, Meredith; Salverda, Anne Pier; Dilley, Laura C; Tanenhaus, Michael K

    2015-04-01

    Two visual-world experiments tested the hypothesis that expectations based on preceding prosody influence the perception of suprasegmental cues to lexical stress. The results demonstrate that listeners' consideration of competing alternatives with different stress patterns (e.g., 'jury/gi'raffe) can be influenced by the fundamental frequency and syllable timing patterns across material preceding a target word. When preceding stressed syllables distal to the target word shared pitch and timing characteristics with the first syllable of the target word, pictures of alternatives with primary lexical stress on the first syllable (e.g., jury) initially attracted more looks than alternatives with unstressed initial syllables (e.g., giraffe). This effect was modulated when preceding unstressed syllables had pitch and timing characteristics similar to the initial syllable of the target word, with more looks to alternatives with unstressed initial syllables (e.g., giraffe) than to those with stressed initial syllables (e.g., jury). These findings suggest that expectations about the acoustic realization of upcoming speech include information about metrical organization and lexical stress and that these expectations constrain the initial interpretation of suprasegmental stress cues. These distal prosody effects implicate online probabilistic inferences about the sources of acoustic-phonetic variation during spoken-word recognition. (c) 2015 APA, all rights reserved.

  5. The lexical processing of abstract and concrete nouns.

    PubMed

    Papagno, Costanza; Fogliata, Arianna; Catricalà, Eleonora; Miniussi, Carlo

    2009-03-31

    Recent activation studies have suggested different neural correlates for processing concrete and abstract words. However, the precise localization is far from being defined. One reason for the heterogeneity of these results could lie in the extreme variability of experimental paradigms, ranging from explicit semantic judgments to lexical decision tasks (auditory and/or visual). The present study explored the processing of abstract and concrete nouns by using repetitive Transcranial Magnetic Stimulation (rTMS) and a lexical decision paradigm in neurologically unimpaired subjects. Four sites were investigated: left inferior frontal, bilateral posterior superior temporal, and left posterior inferior parietal. An interference effect on accuracy was found for abstract words when rTMS was applied over the left temporal site, while for concrete words accuracy decreased when rTMS was applied over the right temporal site. Accuracy for abstract words, but not for concrete words, decreased after frontal stimulation as compared to the sham condition. These results suggest that abstract lexical entries are stored in the posterior part of the left superior temporal gyrus and possibly in the left inferior frontal gyrus, while the regions involved in storing concrete items include the right temporal cortex. It cannot be excluded, however, that additional areas not tested in this experiment are involved in processing both concrete and abstract nouns.

  6. Categorically Defined Targets Trigger Spatiotemporal Visual Attention

    ERIC Educational Resources Information Center

    Wyble, Brad; Bowman, Howard; Potter, Mary C.

    2009-01-01

    Transient attention to a visually salient cue enhances processing of a subsequent target in the same spatial location between 50 to 150 ms after cue onset (K. Nakayama & M. Mackeben, 1989). Do stimuli from a categorically defined target set, such as letters or digits, also generate transient attention? Participants reported digit targets among…

  7. Cue Integration in Categorical Tasks: Insights from Audio-Visual Speech Perception

    PubMed Central

    Bejjanki, Vikranth Rao; Clayards, Meghan; Knill, David C.; Aslin, Richard N.

    2011-01-01

    Previous cue integration studies have examined continuous perceptual dimensions (e.g., size) and have shown that human cue integration is well described by a normative model in which cues are weighted in proportion to their sensory reliability, as estimated from single-cue performance. However, this normative model may not be applicable to categorical perceptual dimensions (e.g., phonemes). In tasks defined over categorical perceptual dimensions, optimal cue weights should depend not only on the sensory variance affecting the perception of each cue but also on the environmental variance inherent in each task-relevant category. Here, we present a computational and experimental investigation of cue integration in a categorical audio-visual (articulatory) speech perception task. Our results show that human performance during audio-visual phonemic labeling is qualitatively consistent with the behavior of a Bayes-optimal observer. Specifically, we show that the participants in our task are sensitive, on a trial-by-trial basis, to the sensory uncertainty associated with the auditory and visual cues during phonemic categorization. In addition, we show that while sensory uncertainty is a significant factor in determining cue weights, it is not the only one: participants' performance is consistent with an optimal model in which environmental, within-category variability also plays a role in determining cue weights. Furthermore, we show that in our task, the sensory variability affecting the visual modality during cue combination is not well estimated from single-cue performance, but can be estimated from multi-cue performance. The findings and computational principles described here represent a principled first step towards characterizing the mechanisms underlying human cue integration in categorical tasks. PMID:21637344
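
    The normative idea summarized above (cue weights reflecting both sensory variance and within-category environmental variance) can be written down in a few lines. The numbers below are made up for illustration, and the linear cue-combination rule is a simplification of the full Bayesian observer described in the record.

    def cue_weights(sensory_var_a, sensory_var_v, category_var):
        """Weights proportional to reliability when category variance adds to each cue's variance."""
        reliability_a = 1.0 / (sensory_var_a + category_var)
        reliability_v = 1.0 / (sensory_var_v + category_var)
        total = reliability_a + reliability_v
        return reliability_a / total, reliability_v / total

    # Hypothetical values: a noisier visual cue and some within-category spread.
    w_auditory, w_visual = cue_weights(sensory_var_a=1.0, sensory_var_v=2.5, category_var=0.8)
    x_auditory, x_visual = 0.4, 1.1   # hypothetical cue readings on one trial
    x_combined = w_auditory * x_auditory + w_visual * x_visual
    print(w_auditory, w_visual, x_combined)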

  8. The Mental Lexicon Is Fully Specified: Evidence from Eye-Tracking

    ERIC Educational Resources Information Center

    Mitterer, Holger

    2011-01-01

    Four visual-world experiments, in which listeners heard spoken words and saw printed words, compared an optimal-perception account with the theory of phonological underspecification. This theory argues that default phonological features are not specified in the mental lexicon, leading to asymmetric lexical matching: Mismatching input…

  9. Hemispheric Differences in the Recruitment of Semantic Processing Mechanisms

    ERIC Educational Resources Information Center

    Kandhadai, Padmapriya; Federmeier, Kara D.

    2010-01-01

    This study examined how the two cerebral hemispheres recruit semantic processing mechanisms by combining event-related potential measures and visual half-field methods in a word priming paradigm in which semantic strength and predictability were manipulated using lexically associated word pairs. Activation patterns on the late positive complex…

  10. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    ERIC Educational Resources Information Center

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2010-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word…

  11. What Do Graded Effects of Semantic Transparency Reveal about Morphological Processing?

    ERIC Educational Resources Information Center

    Feldman, Laurie Beth; Soltano, Emily G.; Pastizzo, Matthew J.; Francis, Sarah E.

    2004-01-01

    We examined the influence of semantic transparency on morphological facilitation in English in three lexical decision experiments. Decision latencies to visual targets (e.g., CASUALNESS) were faster after semantically transparent (e.g., CASUALLY) than semantically opaque (e.g., CASUALTY) primes whether primes were auditory and presented…

  12. Independent Effects of Orthographic and Phonological Facilitation on Spoken Word Production in Mandarin

    ERIC Educational Resources Information Center

    Zhang, Qingfang; Chen, Hsuan-Chih; Weekes, Brendan Stuart; Yang, Yufang

    2009-01-01

    A picture-word interference paradigm with visually presented distractors was used to investigate the independent effects of orthographic and phonological facilitation on Mandarin monosyllabic word production. Both the stimulus-onset asynchrony (SOA) and the picture-word relationship along different lexical dimensions were varied. We observed a…

  13. Early, Equivalent ERP Masked Priming Effects for Regular and Irregular Morphology

    ERIC Educational Resources Information Center

    Morris, Joanna; Stockall, Linnaea

    2012-01-01

    Converging evidence from behavioral masked priming (Rastle & Davis, 2008), EEG masked priming (Morris, Frank, Grainger, & Holcomb, 2007) and single word MEG (Zweig & Pylkkanen, 2008) experiments has provided robust support for a model of lexical processing which includes an early, automatic, visual word form based stage of morphological parsing…

  14. Processing of Inflected Nouns in Late Bilinguals

    ERIC Educational Resources Information Center

    Portin, Marja; Lehtonen, Minna; Laine, Matti

    2007-01-01

    This study investigated the recognition of Swedish inflected nouns in two participant groups. Both groups were Finnish-speaking late learners of Swedish, but the groups differed in regard to their Swedish language proficiency. In a visual lexical decision task, inflected Swedish nouns from three frequency ranges were contrasted with corresponding…

  15. How Word Frequency Affects Morphological Processing in Monolinguals and Bilinguals

    ERIC Educational Resources Information Center

    Lehtonen, Minna; Laine, Matti

    2003-01-01

    The present study investigated processing of morphologically complex words in three different frequency ranges in monolingual Finnish speakers and Finnish-Swedish bilinguals. By employing a visual lexical decision task, we found a differential pattern of results in monolinguals vs. bilinguals. Monolingual Finns seemed to process low frequency and…

  16. Dorsal hippocampus is necessary for visual categorization in rats.

    PubMed

    Kim, Jangjin; Castro, Leyre; Wasserman, Edward A; Freeman, John H

    2018-02-23

    The hippocampus may play a role in categorization because of the need to differentiate stimulus categories (pattern separation) and to recognize category membership of stimuli from partial information (pattern completion). We hypothesized that the hippocampus would be more crucial for categorization of low-density (few relevant features) stimuli-due to the higher demand on pattern separation and pattern completion-than for categorization of high-density (many relevant features) stimuli. Using a touchscreen apparatus, rats were trained to categorize multiple abstract stimuli into two different categories. Each stimulus was a pentagonal configuration of five visual features; some of the visual features were relevant for defining the category whereas others were irrelevant. Two groups of rats were trained with either a high (dense, n = 8) or low (sparse, n = 8) number of category-relevant features. Upon reaching criterion discrimination (≥75% correct, on 2 consecutive days), bilateral cannulas were implanted in the dorsal hippocampus. The rats were then given either vehicle or muscimol infusions into the hippocampus just prior to various testing sessions. They were tested with: the previously trained stimuli (trained), novel stimuli involving new irrelevant features (novel), stimuli involving relocated features (relocation), and a single relevant feature (singleton). In training, the dense group reached criterion faster than the sparse group, indicating that the sparse task was more difficult than the dense task. In testing, accuracy of both groups was equally high for trained and novel stimuli. However, both groups showed impaired accuracy in the relocation and singleton conditions, with a greater deficit in the sparse group. The testing data indicate that rats encode both the relevant features and the spatial locations of the features. Hippocampal inactivation impaired visual categorization regardless of the density of the category-relevant features for the trained, novel, relocation, and singleton stimuli. Hippocampus-mediated pattern completion and pattern separation mechanisms may be necessary for visual categorization involving overlapping irrelevant features. © 2018 Wiley Periodicals, Inc.

  17. Semantic Neighborhood Effects for Abstract versus Concrete Words

    PubMed Central

    Danguecan, Ashley N.; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422

  18. Stimulus Type, Level of Categorization, and Spatial-Frequencies Utilization: Implications for Perceptual Categorization Hierarchies

    ERIC Educational Resources Information Center

    Harel, Assaf; Bentin, Shlomo

    2009-01-01

    The type of visual information needed for categorizing faces and nonface objects was investigated by manipulating spatial frequency scales available in the image during a category verification task addressing basic and subordinate levels. Spatial filtering had opposite effects on faces and airplanes that were modulated by categorization level. The…

  19. A physiologically based nonhomogeneous Poisson counter model of visual identification.

    PubMed

    Christensen, Jeppe H; Markussen, Bo; Bundesen, Claus; Kyllingsbæk, Søren

    2018-04-30

    A physiologically based nonhomogeneous Poisson counter model of visual identification is presented. The model was developed in the framework of a Theory of Visual Attention (Bundesen, 1990; Kyllingsbæk, Markussen, & Bundesen, 2012) and meant for modeling visual identification of objects that are mutually confusable and hard to see. The model assumes that the visual system's initial sensory response consists in tentative visual categorizations, which are accumulated by leaky integration of both transient and sustained components comparable with those found in spike density patterns of early sensory neurons. The sensory response (tentative categorizations) feeds independent Poisson counters, each of which accumulates tentative object categorizations of a particular type to guide overt identification performance. We tested the model's ability to predict the effect of stimulus duration on observed distributions of responses in a nonspeeded (pure accuracy) identification task with eight response alternatives. The time courses of correct and erroneous categorizations were well accounted for when the event-rates of competing Poisson counters were allowed to vary independently over time in a way that mimicked the dynamics of receptive field selectivity as found in neurophysiological studies. Furthermore, the initial sensory response yielded theoretical hazard rate functions that closely resembled empirically estimated ones. Finally, supplied with a Naka-Rushton type contrast gain control, the model provided an explanation for Bloch's law. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
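
    A toy simulation in the spirit of the counter model summarized above is sketched below. Each of eight response alternatives has an independent counter driven by a nonhomogeneous Poisson process whose rate combines a transient burst and a sustained plateau, and the alternative with the highest count at stimulus offset is reported. The rate parameters, the gain given to the target, and the winner-take-all readout are illustrative assumptions, not the fitted model from the record.

    import numpy as np

    rng = np.random.default_rng(1)
    N_ALTERNATIVES = 8
    DT = 0.001   # 1 ms simulation step

    def rate(t, is_target):
        """Event rate (Hz): transient burst decaying over ~50 ms plus a sustained component."""
        transient = 80.0 * np.exp(-t / 0.05)
        sustained = 25.0
        gain = 1.6 if is_target else 1.0   # the target's counter accrues events faster
        return gain * (transient + sustained)

    def simulate_trial(duration, target=0):
        """Simulate one nonspeeded identification trial; return the reported alternative."""
        t = np.arange(0.0, duration, DT)
        counts = np.zeros(N_ALTERNATIVES, dtype=int)
        for i in range(N_ALTERNATIVES):
            expected_per_step = rate(t, is_target=(i == target)) * DT
            counts[i] = rng.poisson(expected_per_step).sum()
        winners = np.flatnonzero(counts == counts.max())
        return int(rng.choice(winners))   # break ties at random

    # Accuracy should grow with exposure duration, as in the stimulus-duration manipulation above.
    for duration in (0.02, 0.05, 0.1, 0.2):
        accuracy = np.mean([simulate_trial(duration) == 0 for _ in range(2000)])
        print(f"exposure {int(duration * 1000):3d} ms: accuracy ~ {accuracy:.2f}")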

  1. The visual attention span deficit in dyslexia is visual and not verbal.

    PubMed

    Lobier, Muriel; Zoubrinetzky, Rachel; Valdois, Sylviane

    2012-06-01

    The visual attention (VA) span deficit hypothesis of dyslexia posits that letter string deficits are a consequence of impaired visual processing. Alternatively, some have interpreted this deficit as resulting from a visual-to-phonology code mapping impairment. This study aims to disambiguate between the two interpretations by investigating performance in a non-verbal character string visual categorization task with verbal and non-verbal stimuli. Results show that VA span ability predicts performance for the non-verbal visual processing task in normal reading children. Furthermore, VA span impaired dyslexic children are also impaired for the categorization task independently of stimuli type. This supports the hypothesis that the underlying impairment responsible for the VA span deficit is visual, not verbal. Copyright © 2011 Elsevier Srl. All rights reserved.

  2. Direct comparison of four implicit memory tests.

    PubMed

    Rajaram, S; Roediger, H L

    1993-07-01

    Four verbal implicit memory tests, word identification, word stem completion, word fragment completion, and anagram solution, were directly compared in one experiment and were contrasted with free recall. On all implicit tests, priming was greatest from prior visual presentation of words, less (but significant) from auditory presentation, and least from pictorial presentations. Typefont did not affect priming. In free recall, pictures were recalled better than words. The four implicit tests all largely index perceptual (lexical) operations in recognizing words, or visual word form representations.

  3. Sex differences in verbal and visual-spatial tasks under different hemispheric visual-field presentation conditions.

    PubMed

    Boyle, Gregory J; Neumann, David L; Furedy, John J; Westbury, H Rae

    2010-04-01

    This paper reports sex differences in cognitive task performance that emerged when 39 Australian university undergraduates (19 men, 20 women) were asked to solve verbal (lexical) and visual-spatial cognitive matching tasks that varied in difficulty and visual field of presentation. Sex significantly interacted with task type, task difficulty, laterality, and changes in performance across trials. The results revealed that the significant individual-differences variable of sex does not always emerge as a significant main effect, but may instead appear in significant interactions with other experimentally manipulated variables. Our results show that sex differences must be taken into account when conducting experiments on human cognitive-task performance.

  4. A supramodal brain substrate of word form processing--an fMRI study on homonym finding with auditory and visual input.

    PubMed

    Balthasar, Andrea J R; Huber, Walter; Weis, Susanne

    2011-09-02

    Homonym processing in German is of theoretical interest as homonyms specifically involve word form information. In a previous study (Weis et al., 2001), we found inferior parietal activation as a correlate of successfully finding a homonym from written stimuli. The present study tries to clarify the underlying mechanism and to examine to what extent the previous homonym effect depends on visual as opposed to auditory input modality. 18 healthy subjects were examined using an event-related functional magnetic resonance imaging paradigm. Participants had to find and articulate a homonym in relation to two spoken or written words. A semantic-lexical task - oral naming from two-word definitions - was used as a control condition. When comparing brain activation for solved homonym trials to both brain activation for unsolved homonyms and solved definition trials, we obtained two activation patterns that characterised both auditory and visual processing. Semantic-lexical processing was related to bilateral inferior frontal activation, whereas left inferior parietal activation was associated with finding the correct homonym. As the inferior parietal activation during successful access to the word form of a homonym was independent of input modality, it might be the substrate of access to word form knowledge. Copyright © 2011 Elsevier B.V. All rights reserved.

  5. Relation between brain activation and lexical performance.

    PubMed

    Booth, James R; Burman, Douglas D; Meyer, Joel R; Gitelman, Darren R; Parrish, Todd B; Mesulam, M Marsel

    2003-07-01

    Functional magnetic resonance imaging (fMRI) was used to determine whether performance on lexical tasks was correlated with cerebral activation patterns. We found that such relationships did exist and that their anatomical distribution reflected the neurocognitive processing routes required by the task. Better performance on intramodal tasks (determining if visual words were spelled the same or if auditory words rhymed) was correlated with more activation in unimodal regions corresponding to the modality of sensory input, namely the fusiform gyrus (BA 37) for written words and the superior temporal gyrus (BA 22) for spoken words. Better performance in tasks requiring cross-modal conversions (determining if auditory words were spelled the same or if visual words rhymed), on the other hand, was correlated with more activation in posterior heteromodal regions, including the supramarginal gyrus (BA 40) and the angular gyrus (BA 39). Better performance in these cross-modal tasks was also correlated with greater activation in unimodal regions corresponding to the target modality of the conversion process (i.e., fusiform gyrus for auditory spelling and superior temporal gyrus for visual rhyming). In contrast, performance on the auditory spelling task was inversely correlated with activation in the superior temporal gyrus possibly reflecting a greater emphasis on the properties of the perceptual input rather than on the relevant transmodal conversions. Copyright 2003 Wiley-Liss, Inc.

  6. Visual Categorization of Natural Movies by Rats

    PubMed Central

    Vinken, Kasper; Vermaercke, Ben

    2014-01-01

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. PMID:25100598

  7. Dissociable Effects of Aging and Mild Cognitive Impairment on Bottom-Up Audiovisual Integration.

    PubMed

    Festa, Elena K; Katz, Andrew P; Ott, Brian R; Tremont, Geoffrey; Heindel, William C

    2017-01-01

    Effective audiovisual sensory integration involves dynamic changes in functional connectivity between superior temporal sulcus and primary sensory areas. This study examined whether disrupted connectivity in early Alzheimer's disease (AD) produces impaired audiovisual integration under conditions requiring greater corticocortical interactions. Audiovisual speech integration was examined in healthy young adult controls (YC), healthy elderly controls (EC), and patients with amnestic mild cognitive impairment (MCI) using McGurk-type stimuli (providing either congruent or incongruent audiovisual speech information) under conditions differing in the strength of bottom-up support and the degree of top-down lexical asymmetry. All groups accurately identified auditory speech under congruent audiovisual conditions, and displayed high levels of visual bias under strong bottom-up incongruent conditions. Under weak bottom-up incongruent conditions, however, EC and amnestic MCI groups displayed opposite patterns of performance, with enhanced visual bias in the EC group and reduced visual bias in the MCI group relative to the YC group. Moreover, there was no overlap between the EC and MCI groups in individual visual bias scores reflecting the change in audiovisual integration from the strong to the weak stimulus conditions. Top-down lexicality influences on visual biasing were observed only in the MCI patients under weaker bottom-up conditions. Results support a deficit in bottom-up audiovisual integration in early AD attributable to disruptions in corticocortical connectivity. Given that this deficit is not simply an exacerbation of changes associated with healthy aging, tests of audiovisual speech integration may serve as sensitive and specific markers of the earliest cognitive change associated with AD.

  8. For a new look at 'lexical errors': evidence from semantic approximations with verbs in aphasia.

    PubMed

    Duvignau, Karine; Tran, Thi Mai; Manchon, Mélanie

    2013-08-01

    The ability to understand the similarity between two phenomena is fundamental for humans. Designated by the term analogy in psychology, this ability plays a role in the categorization of phenomena in the world and in the organisation of the linguistic system. The use of analogy in language often results in non-standard utterances, particularly in speakers with aphasia. These non-standard utterances are almost always studied in a nominal context and considered as errors. We propose a study of the verbal lexicon and present findings that measure, via an action-video naming task, the extent of verb-based non-standard utterances made by 17 speakers with aphasia ("la dame déshabille l'orange"/the lady undresses the orange, "elle casse la tomate"/she breaks the tomato). The first results we have obtained allow us to consider this type of utterance from a new perspective: we propose to eliminate the label of "error", suggesting that such utterances may be viewed as semantic approximations based upon a relationship of inter-domain synonymy and ingrained in the heart of the lexical system.

  9. The time course of spoken word learning and recognition: studies with artificial lexicons.

    PubMed

    Magnuson, James S; Tanenhaus, Michael K; Aslin, Richard N; Dahan, Delphine

    2003-06-01

    The time course of spoken word recognition depends largely on the frequencies of a word and its competitors, or neighbors (similar-sounding words). However, variability in natural lexicons makes systematic analysis of frequency and neighbor similarity difficult. Artificial lexicons were used to achieve precise control over word frequency and phonological similarity. Eye tracking provided time course measures of lexical activation and competition (during spoken instructions to perform visually guided tasks) both during and after word learning, as a function of word frequency, neighbor type, and neighbor frequency. Apparent shifts from holistic to incremental competitor effects were observed in adults and neural network simulations, suggesting such shifts reflect general properties of learning rather than changes in the nature of lexical representations.

  10. Effects of morphological Family Size for young readers.

    PubMed

    Perdijk, Kors; Schreuder, Robert; Baayen, R Harald; Verhoeven, Ludo

    2012-09-01

    Dutch children, from the second and fourth grade of primary school, were each given a visual lexical decision test on 210 Dutch monomorphemic words. After removing words not recognized by a majority of the younger group, (lexical) decisions were analysed by mixed-model regression methods to see whether morphological Family Size influenced decision times over and above several other covariates. The effect of morphological Family Size on decision time was mixed: larger families led to significantly faster decision times for the second graders but not for the fourth graders. Since facilitative effects on decision times had been found for adults, we offer a developmental account to explain the absence of an effect of Family Size on decision times for fourth graders. ©2011 The British Psychological Society.

  11. Phonological perception by birds: budgerigars can perceive lexical stress.

    PubMed

    Hoeschele, Marisa; Fitch, W Tecumseh

    2016-05-01

    Metrical phonology is the perceptual "strength" in language of some syllables relative to others. The ability to perceive lexical stress is important, as it can help a listener segment speech and distinguish the meaning of words and sentences. Despite this importance, there has been little comparative work on the perception of lexical stress across species. We used a go/no-go operant paradigm to train human participants and budgerigars (Melopsittacus undulatus) to distinguish trochaic (stress-initial) from iambic (stress-final) two-syllable nonsense words. Once participants learned the task, we presented both novel nonsense words, and familiar nonsense words that had certain cues removed (e.g., pitch, duration, loudness, or vowel quality) to determine which cues were most important in stress perception. Members of both species learned the task and were then able to generalize to novel exemplars, showing categorical learning rather than rote memorization. Tests using reduced stimuli showed that humans could identify stress patterns with amplitude and pitch alone, but not with only duration or vowel quality. Budgerigars required more than one cue to be present and had trouble if vowel quality or amplitude were missing as cues. The results suggest that stress patterns in human speech can be decoded by other species. Further comparative stress-perception research with more species could help to determine what species characteristics predict this ability. In addition, tests with a variety of stimuli could help to determine how much this ability depends on general pattern learning processes versus vocalization-specific cues.

  12. Shared Features Dominate Semantic Richness Effects for Concrete Concepts

    ERIC Educational Resources Information Center

    Grondin, Ray; Lupker, Stephen J.; McRae, Ken

    2009-01-01

    When asked to list semantic features for concrete concepts, participants list many features for some concepts and few for others. Concepts with many semantic features are processed faster in lexical and semantic decision tasks [Pexman, P. M., Lupker, S. J., & Hino, Y. (2002). "The impact of feedback semantics in visual word recognition:…

  13. Morphological Processing of Chinese Compounds from a Grammatical View

    ERIC Educational Resources Information Center

    Liu, Phil D.; McBride-Chang, Catherine

    2010-01-01

    In the present study, morphological structure processing of Chinese compounds was explored using a visual priming lexical decision task among 21 Hong Kong college students. Two compounding structures were compared. The first type was the subordinate, in which one morpheme modifies the other (e.g., [image omitted] ["laam4 kau4",…

  14. Cognitive Process in Second Language Reading: Transfer of L1 Reading Skills and Strategies.

    ERIC Educational Resources Information Center

    Koda, Keiko

    1988-01-01

    Experiments with skilled readers (N=83) from four native-language orthographic backgrounds examined the effects of: (1) blocked visual or auditory information on lexical decision-making; and (2) heterographic homophones on reading comprehension. Native and second language transfer does occur in second language reading, and orthographic structure…

  15. Danger and Usefulness Are Detected Early in Auditory Lexical Processing: Evidence from Electroencephalography

    ERIC Educational Resources Information Center

    Kryuchkova, Tatiana; Tucker, Benjamin V.; Wurm, Lee H.; Baayen, R. Harald

    2012-01-01

    Visual emotionally charged stimuli have been shown to elicit early electrophysiological responses (e.g., Ihssen, Heim, & Keil, 2007; Schupp, Junghofer, Weike, & Hamm, 2003; Stolarova, Keil, & Moratti, 2006). We presented isolated words to listeners, and observed, using generalized additive modeling, oscillations in the upper part of the delta…

  16. Repetition Blindness: Out of Sight or Out of Mind?

    ERIC Educational Resources Information Center

    Morris, Alison L.; Harris, Catherine L.

    2004-01-01

    Does repetition blindness represent a failure of perception or of memory? In Experiment 1, participants viewed rapid serial visual presentation (RSVP) sentences. When critical words (C1 and C2) were orthographically similar, C2 was frequently omitted from serial report; however, repetition priming for C2 on a postsentence lexical decision task was…

  17. Do Transposed-Letter Similarity Effects Occur at a Morpheme Level? Evidence for Morpho-Orthographic Decomposition

    ERIC Educational Resources Information Center

    Dunabeitia, Jon Andoni; Peream, Manuel; Carreiras, Manuel

    2007-01-01

    When does morphological decomposition occur in visual word recognition? An increasing body of evidence suggests the presence of early morphological processing. The present work investigates this issue via an orthographic similarity manipulation. Three masked priming lexical decision experiments were conducted to examine the transposed-letter…

  18. Not All Ambiguous Words Are Created Equal: An EEG Investigation of Homonymy and Polysemy

    ERIC Educational Resources Information Center

    Klepousniotou, Ekaterini; Pike, G. Bruce; Steinhauer, Karsten; Gracco, Vincent

    2012-01-01

    Event-related potentials (ERPs) were used to investigate the time-course of meaning activation of different types of ambiguous words. Unbalanced homonymous ("pen"), balanced homonymous ("panel"), metaphorically polysemous ("lip"), and metonymically polysemous words ("rabbit") were used in a visual single-word priming delayed lexical decision task.…

  19. Before the N400: Effects of Lexical-Semantic Violations in Visual Cortex

    ERIC Educational Resources Information Center

    Dikker, Suzanne; Pylkkanen, Liina

    2011-01-01

    There exists an increasing body of research demonstrating that language processing is aided by context-based predictions. Recent findings suggest that the brain generates estimates about the likely physical appearance of upcoming words based on syntactic predictions: words that do not physically look like the expected syntactic category show…

  20. Emotion Word Processing: Effects of Word Type and Valence in Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Kazanas, Stephanie A.; Altarriba, Jeanette

    2016-01-01

    Previous studies comparing emotion and emotion-laden word processing have used various cognitive tasks, including an Affective Simon Task (Altarriba and Basnight-Brown in "Int J Billing" 15(3):310-328, 2011), lexical decision task (LDT; Kazanas and Altarriba in "Am J Psychol", in press), and rapid serial visual processing…

  1. Word Stress in German Single-Word Reading

    ERIC Educational Resources Information Center

    Beyermann, Sandra; Penke, Martina

    2014-01-01

    This article reports a lexical-decision experiment that was conducted to investigate the impact of word stress on visual word recognition in German. Reaction-time latencies and error rates of German readers on different levels of reading proficiency (i.e., third graders and fifth graders from primary school and university students) were compared…

  2. Preserved Visual Language Identification Despite Severe Alexia

    ERIC Educational Resources Information Center

    Di Pietro, Marie; Ptak, Radek; Schnider, Armin

    2012-01-01

    Patients with letter-by-letter alexia may have residual access to lexical or semantic representations of words despite severely impaired overt word recognition (reading). Here, we report a multilingual patient with severe letter-by-letter alexia who rapidly identified the language of written words and sentences in French and English while he had…

  3. Additive and Interactive Effects on Response Time Distributions in Visual Word Recognition

    ERIC Educational Resources Information Center

    Yap, Melvin J.; Balota, David A.

    2007-01-01

    Across 3 different word recognition tasks, distributional analyses were used to examine the joint effects of stimulus quality and word frequency on underlying response time distributions. Consistent with the extant literature, stimulus quality and word frequency produced additive effects in lexical decision, not only in the means but also in the…

  4. Specifying Theories of Developmental Dyslexia: A Diffusion Model Analysis of Word Recognition

    ERIC Educational Resources Information Center

    Zeguers, Maaike H. T.; Snellings, Patrick; Tijms, Jurgen; Weeda, Wouter D.; Tamboer, Peter; Bexkens, Anika; Huizenga, Hilde M.

    2011-01-01

    The nature of word recognition difficulties in developmental dyslexia is still a topic of controversy. We investigated the contribution of phonological processing deficits and uncertainty to the word recognition difficulties of dyslexic children by mathematical diffusion modeling of visual and auditory lexical decision data. The first study showed…

  5. [When shape-invariant recognition ('A' = 'a') fails. A case study of pure alexia and kinesthetic facilitation].

    PubMed

    Diesfeldt, H F A

    2011-06-01

    A right-handed patient, aged 72, manifested alexia without agraphia, a right homonymous hemianopia and an impaired ability to identify visually presented objects. He was completely unable to read words aloud and severely deficient in naming visually presented letters. He responded to orthographic familiarity in the lexical decision tasks of the Psycholinguistic Assessments of Language Processing in Aphasia (PALPA) rather than to the lexicality of the letter strings. He was impaired at deciding whether two letters of different case (e.g., A, a) are the same, though he could distinguish real letters from made-up ones and from their mirror images. Consequently, his core deficit in reading was posited at the level of the abstract letter identifiers. When he was asked to trace a letter with his right index finger, kinesthetic facilitation enabled him to read letters and words aloud. Though he could use intact motor representations of letters in order to facilitate recognition and reading, the slow, sequential and error-prone process of reading letter by letter made him abandon further training.

  6. The activation of segmental and tonal information in visual word recognition.

    PubMed

    Li, Chuchu; Lin, Candise Y; Wang, Min; Jiang, Nan

    2013-08-01

    Mandarin Chinese has a logographic script in which graphemes map onto syllables and morphemes. It is not clear whether Chinese readers activate phonological information during lexical access, although phonological information is not explicitly represented in Chinese orthography. In the present study, we examined the activation of phonological information, including segmental and tonal information in Chinese visual word recognition, using the Stroop paradigm. Native Mandarin speakers named the presentation color of Chinese characters in Mandarin. The visual stimuli were divided into five types: color characters (e.g., , hong2, "red"), homophones of the color characters (S+T+; e.g., , hong2, "flood"), different-tone homophones (S+T-; e.g., , hong1, "boom"), characters that shared the same tone but differed in segments with the color characters (S-T+; e.g., , ping2, "bottle"), and neutral characters (S-T-; e.g., , qian1, "leading through"). Classic Stroop facilitation was shown in all color-congruent trials, and interference was shown in the incongruent trials. Furthermore, the Stroop effect was stronger for S+T- than for S-T+ trials, and was similar between S+T+ and S+T- trials. These findings suggested that both tonal and segmental forms of information play roles in lexical constraints; however, segmental information has more weight than tonal information. We proposed a revised visual word recognition model in which the functions of both segmental and suprasegmental types of information and their relative weights are taken into account.

  7. The locus of impairment in English developmental letter position dyslexia

    PubMed Central

    Kezilas, Yvette; Kohnen, Saskia; McKague, Meredith; Castles, Anne

    2014-01-01

    Many children with reading difficulties display phonological deficits and struggle to acquire non-lexical reading skills. However, not all children with reading difficulties have these problems, such as children with selective letter position dyslexia (LPD), who make excessive migration errors (such as reading slime as “smile”). Previous research has explored three possible loci for the deficit – the phonological output buffer, the orthographic input lexicon, and the orthographic-visual analysis stage of reading. While there is compelling evidence against a phonological output buffer and orthographic input lexicon deficit account of English LPD, the evidence in support of an orthographic-visual analysis deficit is currently limited. In this multiple single-case study with three English-speaking children with developmental LPD, we aimed to both replicate and extend previous findings regarding the locus of impairment in English LPD. First, we ruled out a phonological output buffer and an orthographic input lexicon deficit by administering tasks that directly assess phonological processing and lexical guessing. We then went on to directly assess whether or not children with LPD have an orthographic-visual analysis deficit by modifying two tasks that have previously been used to localize processing at this level: a same-different decision task and a non-word reading task. The results from these tasks indicate that LPD is most likely caused by a deficit specific to the coding of letter positions at the orthographic-visual analysis stage of reading. These findings provide further evidence for the heterogeneity of dyslexia and its underlying causes. PMID:24917802

  8. Stimulus Dependency of Object-Evoked Responses in Human Visual Cortex: An Inverse Problem for Category Specificity

    PubMed Central

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200–250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components. PMID:22363479

  9. Stimulus dependency of object-evoked responses in human visual cortex: an inverse problem for category specificity.

    PubMed

    Graewe, Britta; De Weerd, Peter; Farivar, Reza; Castelo-Branco, Miguel

    2012-01-01

    Many studies have linked the processing of different object categories to specific event-related potentials (ERPs) such as the face-specific N170. Despite reports showing that object-related ERPs are influenced by visual stimulus features, there is consensus that these components primarily reflect categorical aspects of the stimuli. Here, we re-investigated this idea by systematically measuring the effects of visual feature manipulations on ERP responses elicited by both structure-from-motion (SFM)-defined and luminance-defined object stimuli. SFM objects elicited a novel component at 200-250 ms (N250) over parietal and posterior temporal sites. We found, however, that the N250 amplitude was unaffected by restructuring SFM stimuli into meaningless objects based on identical visual cues. This suggests that this N250 peak was not uniquely linked to categorical aspects of the objects, but is strongly determined by visual stimulus features. We provide strong support for this hypothesis by parametrically manipulating the depth range of both SFM- and luminance-defined object stimuli and showing that the N250 evoked by SFM stimuli as well as the well-known N170 to static faces were sensitive to this manipulation. Importantly, this effect could not be attributed to compromised object categorization in low depth stimuli, confirming a strong impact of visual stimulus features on object-related ERP signals. As ERP components linked with visual categorical object perception are likely determined by multiple stimulus features, this creates an interesting inverse problem when deriving specific perceptual processes from variations in ERP components.

  10. Foveal vs. parafoveal attention-grabbing power of threat-related information.

    PubMed

    Calvo, Manuel G; Castillo, M Dolores

    2005-01-01

    We investigated whether threat words presented in attended (foveal) and in unattended (parafoveal) locations of the visual field are attention grabbing. Neutral (nonemotional) words were presented at fixation as probes in a lexical decision task. Each probe word was preceded by 2 simultaneous prime words (1 foveal, 1 parafoveal), either threatening or neutral, for 150 ms. The stimulus onset asynchrony (SOA) between the primes and the probe was either 300 or 1,000 ms. Results revealed slowed lexical decision times on the probe when primed by an unrelated foveal threat word at the short (300-ms) delay. In contrast, parafoveal threat words did not affect processing of the neutral probe at either delay. Nevertheless, both neutral and threat parafoveal words facilitated lexical decisions for identical probe words at 300-ms SOA. This suggests that threat words appearing outside the focus of attention do not draw or engage cognitive resources to such an extent as to produce interference in the processing of concurrent or subsequent neutral stimuli. An explanation of the lack of parafoveal interference is that semantic content is not extracted in the parafovea.

  11. Non-linear processing of a linear speech stream: The influence of morphological structure on the recognition of spoken Arabic words.

    PubMed

    Gwilliams, L; Marantz, A

    2015-08-01

    Although the significance of morphological structure is established in visual word processing, its role in auditory processing remains unclear. Using magnetoencephalography we probe the significance of the root morpheme for spoken Arabic words with two experimental manipulations. First we compare a model of auditory processing that calculates probable lexical outcomes based on whole-word competitors, versus a model that only considers the root as relevant to lexical identification. Second, we assess violations to the root-specific Obligatory Contour Principle (OCP), which disallows root-initial consonant gemination. Our results show root prediction to significantly correlate with neural activity in superior temporal regions, independent of predictions based on whole-word competitors. Furthermore, words that violated the OCP constraint were significantly easier to dismiss as valid words than probability-matched counterparts. The findings suggest that lexical auditory processing is dependent upon morphological structure, and that the root forms a principal unit through which spoken words are recognised. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Lateralized effects of categorical and coordinate spatial processing of component parts on the recognition of 3D non-nameable objects.

    PubMed

    Saneyoshi, Ayako; Michimata, Chikashi

    2009-12-01

    Participants performed two object-matching tasks for novel, non-nameable objects consisting of geons. For each original stimulus, two transformations were applied to create comparison stimuli. In the categorical transformation, a geon connected to geon A was moved to geon B. In the coordinate transformation, a geon connected to geon A was moved to a different position on geon A. The Categorical task consisted of the original and the categorically transformed objects. The Coordinate task consisted of the original and the coordinately transformed objects. The original object was presented to the central visual field, followed by a comparison object presented to the right or left visual half-fields (RVF and LVF). The results showed an RVF advantage for the Categorical task and an LVF advantage for the Coordinate task. The possibility that categorical and coordinate spatial processing subsystems would be basic computational elements for between- and within-category object recognition was discussed.

  13. Fine-grained temporal coding of visually-similar categories in the ventral visual pathway and prefrontal cortex

    PubMed Central

    Xu, Yang; D'Lauro, Christopher; Pyles, John A.; Kass, Robert E.; Tarr, Michael J.

    2013-01-01

    Humans are remarkably proficient at categorizing visually-similar objects. To better understand the cortical basis of this categorization process, we used magnetoencephalography (MEG) to record neural activity while participants learned–with feedback–to discriminate two highly-similar, novel visual categories. We hypothesized that although prefrontal regions would mediate early category learning, this role would diminish with increasing category familiarity and that regions within the ventral visual pathway would come to play a more prominent role in encoding category-relevant information as learning progressed. Early in learning we observed some degree of categorical discriminability and predictability in both prefrontal cortex and the ventral visual pathway. Predictability improved significantly above chance in the ventral visual pathway over the course of learning with the left inferior temporal and fusiform gyri showing the greatest improvement in predictability between 150 and 250 ms (M200) during category learning. In contrast, there was no comparable increase in discriminability in prefrontal cortex with the only significant post-learning effect being a decrease in predictability in the inferior frontal gyrus between 250 and 350 ms (M300). Thus, the ventral visual pathway appears to encode learned visual categories over the long term. At the same time these results add to our understanding of the cortical origins of previously reported signature temporal components associated with perceptual learning. PMID:24146656

  14. Visual context modulates potentiation of grasp types during semantic object categorization.

    PubMed

    Kalénine, Solène; Shapiro, Allison D; Flumini, Andrea; Borghi, Anna M; Buxbaum, Laurel J

    2014-06-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use-compatible, as compared with move-compatible, contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions.

  15. Visual context modulates potentiation of grasp types during semantic object categorization

    PubMed Central

    Kalénine, Solène; Shapiro, Allison D.; Flumini, Andrea; Borghi, Anna M.; Buxbaum, Laurel J.

    2013-01-01

    Substantial evidence suggests that conceptual processing of manipulable objects is associated with potentiation of action. Such data have been viewed as evidence that objects are recognized via access to action features. Many objects, however, are associated with multiple actions. For example, a kitchen timer may be clenched with a power grip to move it, but pinched with a precision grip to use it. The present study tested the hypothesis that action evocation during conceptual object processing is responsive to the visual scene in which objects are presented. Twenty-five healthy adults were asked to categorize object pictures presented in different naturalistic visual contexts that evoke either move- or use-related actions. Categorization judgments (natural vs. artifact) were performed by executing a move- or use-related action (clench vs. pinch) on a response device, and response times were assessed as a function of contextual congruence. Although the actions performed were irrelevant to the categorization judgment, responses were significantly faster when actions were compatible with the visual context. This compatibility effect was largely driven by faster pinch responses when objects were presented in use- compared to move-compatible contexts. The present study is the first to highlight the influence of visual scene on stimulus-response compatibility effects during semantic object processing. These data support the hypothesis that action evocation during conceptual object processing is biased toward context-relevant actions. PMID:24186270

  16. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer.

    PubMed

    Ashtiani, Matin N; Kheradpisheh, Saeed R; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the "entry" level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies).
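
    As a hedged illustration of the band-pass filtering described above (not the study's actual stimulus pipeline), the sketch below filters a grayscale image in the spatial-frequency domain with a hard annular mask; the image array, the cutoff values (in cycles per image), and the function name are all assumptions made for the example.

        import numpy as np

        def bandpass_filter(img, low_c, high_c):
            # Keep spatial frequencies between low_c and high_c (cycles per image).
            h, w = img.shape
            fy = np.fft.fftfreq(h)[:, None] * h      # vertical frequency, cycles per image
            fx = np.fft.fftfreq(w)[None, :] * w      # horizontal frequency, cycles per image
            radius = np.sqrt(fx ** 2 + fy ** 2)
            mask = (radius >= low_c) & (radius <= high_c)
            return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

        img = np.random.default_rng(0).random((256, 256))   # placeholder grayscale image
        coarse = bandpass_filter(img, 0, 8)                  # low spatial frequencies only
        fine = bandpass_filter(img, 32, 128)                 # high spatial frequencies only

    Real stimulus sets would typically also equate luminance and contrast across the filtered versions, a step omitted here for brevity.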

  17. Object Categorization in Finer Levels Relies More on Higher Spatial Frequencies and Takes Longer

    PubMed Central

    Ashtiani, Matin N.; Kheradpisheh, Saeed R.; Masquelier, Timothée; Ganjtabesh, Mohammad

    2017-01-01

    The human visual system contains a hierarchical sequence of modules that take part in visual perception at different levels of abstraction, i.e., superordinate, basic, and subordinate levels. One important question is to identify the “entry” level at which the visual representation is commenced in the process of object recognition. For a long time, it was believed that the basic level had a temporal advantage over two others. This claim has been challenged recently. Here we used a series of psychophysics experiments, based on a rapid presentation paradigm, as well as two computational models, with bandpass filtered images of five object classes to study the processing order of the categorization levels. In these experiments, we investigated the type of visual information required for categorizing objects in each level by varying the spatial frequency bands of the input image. The results of our psychophysics experiments and computational models are consistent. They indicate that the different spatial frequency information had different effects on object categorization in each level. In the absence of high frequency information, subordinate and basic level categorization are performed less accurately, while the superordinate level is performed well. This means that low frequency information is sufficient for superordinate level, but not for the basic and subordinate levels. These finer levels rely more on high frequency information, which appears to take longer to be processed, leading to longer reaction times. Finally, to avoid the ceiling effect, we evaluated the robustness of the results by adding different amounts of noise to the input images and repeating the experiments. As expected, the categorization accuracy decreased and the reaction time increased significantly, but the trends were the same. This shows that our results are not due to a ceiling effect. The compatibility between our psychophysical and computational results suggests that the temporal advantage of the superordinate (resp. basic) level to basic (resp. subordinate) level is mainly due to the computational constraints (the visual system processes higher spatial frequencies more slowly, and categorization in finer levels depends more on these higher spatial frequencies). PMID:28790954

  18. Neural differentiation of lexico-syntactic categories or semantic features? Event-related potential evidence for both.

    PubMed

    Kellenbach, Marion L; Wijers, Albertus A; Hovius, Marjolijn; Mulder, Juul; Mulder, Gijsbertus

    2002-05-15

    Event-related potentials (ERPs) were used to investigate whether processing differences between nouns and verbs can be accounted for by the differential salience of visual-perceptual and motor attributes in their semantic specifications. Three subclasses of nouns and verbs were selected, which differed in their semantic attribute composition (abstract, high visual, high visual and motor). Single visual word presentation with a recognition memory task was used. While multiple robust and parallel ERP effects were observed for both grammatical class and attribute type, there were no interactions between these. This pattern of effects provides support for lexical-semantic knowledge being organized in a manner that takes account both of category-based (grammatical class) and attribute-based distinctions.

  19. Visual categorization of natural movies by rats.

    PubMed

    Vinken, Kasper; Vermaercke, Ben; Op de Beeck, Hans P

    2014-08-06

    Visual categorization of complex, natural stimuli has been studied for some time in human and nonhuman primates. Recent interest in the rodent as a model for visual perception, including higher-level functional specialization, leads to the question of how rodents would perform on a categorization task using natural stimuli. To answer this question, rats were trained in a two-alternative forced choice task to discriminate movies containing rats from movies containing other objects and from scrambled movies (ordinate-level categorization). Subsequently, transfer to novel, previously unseen stimuli was tested, followed by a series of control probes. The results show that the animals are capable of acquiring a decision rule by abstracting common features from natural movies to generalize categorization to new stimuli. Control probes demonstrate that they did not use single low-level features, such as motion energy or (local) luminance. Significant generalization was even present with stationary snapshots from untrained movies. The variability within and between training and test stimuli, the complexity of natural movies, and the control experiments and analyses all suggest that a more high-level rule based on more complex stimulus features than local luminance-based cues was used to classify the novel stimuli. In conclusion, natural stimuli can be used to probe ordinate-level categorization in rats. Copyright © 2014 the authors.

  20. Influence of affective words on lexical decision task in major depression.

    PubMed

    Stip, E; Lecours, A R; Chertkow, H; Elie, R; O'Connor, K

    1994-05-01

    In cognitive science, the lexical decision task is used to investigate visual word recognition and lexical access. The issue of whether or not individuals who are depressed differ in their access to affectively laden words and specifically to words that have negative affect was examined. Based on some aspects of the Resource Allocation Model (Ellis), it was postulated that patients suffering from depression take more time to recognize items from an affectively loaded list. In order to compare their behavior in a lexical decision task, patients suffering from depression and healthy controls were studied. We hoped to find an interaction between the mood state of subjects and the categories (affective or neutral) of words. Two groups of right-handed adults served as subjects in our experiment. The first group consisted of 11 patients suffering from depression (mean age: 40.2; sd: 6.8). All of this group met the DSM-III-R and the Research Diagnostic Criteria for major depressive disorder. Severity of their disease was rated using the 24-item Hamilton Depressive Rating Scale. All patients suffering from depression were without psychotropic medication. The control group was composed of 24 subjects (mean age: 32.7; sd: 7.9). A depressive word-list and a neutral word-list were built and a computer was used for the lexical-decision task. A longer reaction time to detect the non-word stimuli (F(1, 33) = 11.19, p < 0.01) was observed in the patients compared with the normal subjects. In the analysis of the word stimuli, a group × list interaction (F(1, 33) = 7.18, p < 0.01) was found. (ABSTRACT TRUNCATED AT 250 WORDS)

  1. Multidimensional analysis of the abnormal neural oscillations associated with lexical processing in schizophrenia.

    PubMed

    Xu, Tingting; Stephane, Massoud; Parhi, Keshab K

    2013-04-01

    The neural mechanisms of language abnormalities, the core symptoms in schizophrenia, remain unclear. In this study, a new experimental paradigm, combining magnetoencephalography (MEG) techniques and machine intelligence methodologies, was designed to gain knowledge about the frequency, brain location, and time of occurrence of the neural oscillations that are associated with lexical processing in schizophrenia. The 248-channel MEG recordings were obtained from 12 patients with schizophrenia and 10 healthy controls, during a lexical processing task, where the patients discriminated correct from incorrect lexical stimuli that were visually presented. Event-related desynchronization/synchronization (ERD/ERS) was computed along the frequency, time, and space dimensions combined, that resulted in a large spectral-spatial-temporal ERD/ERS feature set. Machine intelligence techniques were then applied to select a small subset of oscillation patterns that are abnormal in patients with schizophrenia, according to their discriminating power in patient and control classification. Patients with schizophrenia showed abnormal ERD/ERS patterns during both lexical encoding and post-encoding periods. The top-ranked features were located at the occipital and left frontal-temporal areas, and covered a wide frequency range, including δ (1-4 Hz), α (8-12 Hz), β (12-32 Hz), and γ (32-48 Hz) bands. These top features could discriminate the patient group from the control group with 90.91% high accuracy, which demonstrates significant brain oscillation abnormalities in patients with schizophrenia at the specific frequency, time, and brain location indicated by these top features. As neural oscillation abnormality may be due to the mechanisms of the disease, the spectral, spatial, and temporal content of the discriminating features can offer useful information for helping understand the physiological basis of the language disorder in schizophrenia, as well as the pathology of the disease itself.
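
    The feature-selection-plus-classification step described above can be sketched generically. The code below is a hypothetical illustration, not the authors' pipeline: it assumes a matrix X of spectral-spatial-temporal ERD/ERS features (one row per participant, with the 12-patient/10-control split from the abstract) and labels y, and estimates accuracy with a cross-validated linear SVM in which feature ranking happens inside each training fold to avoid selection bias.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.feature_selection import SelectKBest, f_classif
        from sklearn.svm import SVC
        from sklearn.model_selection import cross_val_score, StratifiedKFold

        rng = np.random.default_rng(0)
        X = rng.standard_normal((22, 500))     # placeholder ERD/ERS feature matrix (participants x features)
        y = np.array([1] * 12 + [0] * 10)      # 12 patients with schizophrenia, 10 healthy controls

        clf = make_pipeline(StandardScaler(),
                            SelectKBest(f_classif, k=10),   # keep only the top-ranked features
                            SVC(kernel="linear"))
        cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
        print(f"Cross-validated accuracy: {cross_val_score(clf, X, y, cv=cv).mean():.2%}")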

  2. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    ERIC Educational Resources Information Center

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  3. The roles of shared vs. distinctive conceptual features in lexical access

    PubMed Central

    Vieth, Harrison E.; McMahon, Katie L.; de Zubicaray, Greig I.

    2014-01-01

    Contemporary models of spoken word production assume conceptual feature sharing determines the speed with which objects are named in categorically-related contexts. However, statistical models of concept representation have also identified a role for feature distinctiveness, i.e., features that identify a single concept and serve to distinguish it quickly from other similar concepts. In three experiments we investigated whether distinctive features might explain reports of counter-intuitive semantic facilitation effects in the picture word interference (PWI) paradigm. In Experiment 1, categorically-related distractors matched in terms of semantic similarity ratings (e.g., zebra and pony) and manipulated with respect to feature distinctiveness (e.g., a zebra has stripes unlike other equine species) elicited interference effects of comparable magnitude. Experiments 2 and 3 investigated the role of feature distinctiveness with respect to reports of facilitated naming with part-whole distractor-target relations (e.g., a hump is a distinguishing part of a CAMEL, whereas knee is not, vs. an unrelated part such as plug). Related part distractors did not influence target picture naming latencies significantly when the part denoted by the related distractor was not visible in the target picture (whether distinctive or not; Experiment 2). When the part denoted by the related distractor was visible in the target picture, non-distinctive part distractors slowed target naming significantly at SOA of −150 ms (Experiment 3). Thus, our results show that semantic interference does occur for part-whole distractor-target relations in PWI, but only when distractors denote features shared with the target and other category exemplars. We discuss the implications of these results for some recently developed, novel accounts of lexical access in spoken word production. PMID:25278914

  4. Brain Plasticity in Speech Training in Native English Speakers Learning Mandarin Tones

    NASA Astrophysics Data System (ADS)

    Heinzen, Christina Carolyn

    The current study employed behavioral and event-related potential (ERP) measures to investigate brain plasticity associated with second-language (L2) phonetic learning based on an adaptive computer training program. The program utilized the acoustic characteristics of Infant-Directed Speech (IDS) to train monolingual American English-speaking listeners to perceive Mandarin lexical tones. Behavioral identification and discrimination tasks were conducted using naturally recorded speech, carefully controlled synthetic speech, and non-speech control stimuli. The ERP experiments were conducted with selected synthetic speech stimuli in a passive listening oddball paradigm. Identical pre- and post-tests were administered to nine adult listeners, who completed two to three hours of perceptual training. The perceptual training sessions used pair-wise lexical tone identification, and progressed through seven levels of difficulty for each tone pair. The levels of difficulty included progression in speaker variability from one to four speakers and progression through four levels of acoustic exaggeration of duration, pitch range, and pitch contour. Behavioral results for the natural speech stimuli revealed significant training-induced improvement in identification of Tones 1, 3, and 4. Improvements in identification of Tone 4 generalized to novel stimuli as well. Additionally, comparison between discrimination of across-category and within-category stimulus pairs taken from a synthetic continuum revealed a training-induced shift toward more native-like categorical perception of the Mandarin lexical tones. Analysis of the Mismatch Negativity (MMN) responses in the ERP data revealed increased amplitude and decreased latency for pre-attentive processing of across-category discrimination as a result of training. There were also laterality changes in the MMN responses to the non-speech control stimuli, which could reflect reallocation of brain resources in processing pitch patterns for the across-category lexical tone contrast. Overall, the results support the use of IDS characteristics in training non-native speech contrasts and provide impetus for further research.

  5. Stop identity cue as a cue to language identity

    NASA Astrophysics Data System (ADS)

    Castonguay, Paula Lisa

    The purpose of the present study was to determine whether language membership could potentially be cued by the acoustic-phonetic detail of word-initial stops and retained all the way through the process of lexical access to aid in language identification. Of particular interest were language-specific differences in Canadian English (CE) and Canadian French (CF) word-initial stops. Experiment 1 consisted of an interlingual homophone production task. The purpose of this study was to examine how word-initial stop consonants differ in terms of acoustic properties in CE and CF interlingual homophones. The analyses from the bilingual speakers in Experiment 1 indicate that bilinguals do produce language-specific differences in CE and CF word-initial stops, and that closure duration, voice onset time, and burst spectral SD may provide cues to language identity in CE and CF stops. Experiment 2 consisted of a Phoneme and Language Categorization task. The purpose of this study was to examine how stop identity cues, such as VOT and closure duration, influence a listener to identify word-initial stop consonants as belonging to CE or CF. The RTs from the bilingual listeners in this study indicate that bilinguals do perceive language-specific differences in CE and CF word-initial stops, and that voice onset time may provide cues to phoneme and language membership in CE and CF stops. Experiment 3 consisted of a Phonological-Semantic priming task. The purpose of this study was to examine how subphonetic variations, such as changes in the VOT, affect lexical access. The results of Experiment 3 suggest that language-specific cues, such as VOT, affect the composition of the bilingual cohort and that the extent to which English and/or French words are activated is dependent on the language-specific cues present in a word. The findings of this study enhance our theoretical understanding of lexical structure and lexical access in bilingual speakers. In addition, this study provides further insight into cross-language effects at the subphonetic level.

  6. Dissociable effects of inter-stimulus interval and presentation duration on rapid face categorization.

    PubMed

    Retter, Talia L; Jiang, Fang; Webster, Michael A; Rossion, Bruno

    2018-04-01

    Fast periodic visual stimulation combined with electroencephalography (FPVS-EEG) has unique sensitivity and objectivity in measuring rapid visual categorization processes. It constrains image processing time by presenting stimuli rapidly through brief stimulus presentation durations and short inter-stimulus intervals. However, the selective impact of these temporal parameters on visual categorization is largely unknown. Here, we presented natural images of objects at a rate of 10 or 20 per second (10 or 20 Hz), with faces appearing once per second (1 Hz), leading to two distinct frequency-tagged EEG responses. Twelve observers were tested with three squarewave image presentation conditions: 1) with an ISI, a traditional 50% duty cycle at 10 Hz (50-ms stimulus duration separated by a 50-ms ISI); 2) removing the ISI and matching the rate, a 100% duty cycle at 10 Hz (100-ms duration with 0-ms ISI); 3) removing the ISI and matching the stimulus presentation duration, a 100% duty cycle at 20 Hz (50-ms duration with 0-ms ISI). The face categorization response was significantly decreased in the 20 Hz 100% condition. The conditions at 10 Hz showed similar face-categorization responses, peaking maximally over the right occipito-temporal (ROT) cortex. However, the onset of the 10 Hz 100% response was delayed by about 20 ms over the ROT region relative to the 10 Hz 50% condition, likely due to immediate forward-masking by preceding images. Taken together, these results help to interpret how the FPVS-EEG paradigm sets temporal constraints on visual image categorization. Copyright © 2018 Elsevier Ltd. All rights reserved.
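
    The frequency-tagging logic of FPVS-EEG can be shown with a small, self-contained sketch. This is a hypothetical illustration rather than the authors' analysis: it assumes a single EEG channel sampled at 512 Hz, a 10 Hz base stimulation rate with faces appearing at 1 Hz, and a synthetic signal standing in for real data; the tagged responses are read off the single-sided FFT amplitude spectrum at the two stimulation frequencies.

        import numpy as np

        fs, dur = 512, 60.0                          # sampling rate (Hz) and recording length (s)
        t = np.arange(0, dur, 1.0 / fs)
        rng = np.random.default_rng(0)
        # Synthetic "EEG": a 10 Hz general visual response, a 1 Hz face-selective response, plus noise.
        eeg = (2.0 * np.sin(2 * np.pi * 10 * t)
               + 0.8 * np.sin(2 * np.pi * 1 * t)
               + rng.standard_normal(t.size))

        amps = np.abs(np.fft.rfft(eeg)) / t.size * 2  # single-sided amplitude spectrum
        freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
        for f_tag in (1.0, 10.0):                     # face rate and base rate
            i = np.argmin(np.abs(freqs - f_tag))
            print(f"Amplitude at {f_tag:g} Hz: {amps[i]:.2f}")

    In practice the face-selective response is usually quantified not only at 1 Hz but summed over its harmonics (2 Hz, 3 Hz, and so on), excluding bins shared with the base rate.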

  7. Multicomponent Treatment of Rapid Naming, Reading Rate, and Visual Attention in Single and Double Deficit Dyslexics

    ERIC Educational Resources Information Center

    Johnson, Kary A.

    2013-01-01

    While the relationship between rapid automatic naming (RAN) deficiencies and dyslexia (reading disability) is well developed and supported in the behavioral and medical research literature to date, direct treatment of the specific RAN deficiency in addition to subsequently poor reading outcomes in the lexical skill of reading rate and the…

  8. Knowledge of a Second Language Influences Auditory Word Recognition in the Native Language

    ERIC Educational Resources Information Center

    Lagrou, Evelyne; Hartsuiker, Robert J.; Duyck, Wouter

    2011-01-01

    Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether…

  9. Changing Places: A Cross-Language Perspective on Frequency and Family Size in Dutch and Hebrew

    ERIC Educational Resources Information Center

    Moscoso del Prado Martin, Fermin; Deutsch, Avital; Frost, Ram; Schreuder, Robert; De Jong, Nivja H.; Baayen, R. Harald

    2005-01-01

    This study uses the morphological family size effect as a tool for exploring the degree of isomorphism in the networks of morphologically related words in the Hebrew and Dutch mental lexicon. Hebrew and Dutch are genetically unrelated, and they structure their morphologically complex words in very different ways. Two visual lexical decision…

  10. Investigating Developmental Trajectories of Morphemes as Reading Units in German

    ERIC Educational Resources Information Center

    Hasenäcker, Jana; Schröter, Pauline; Schroeder, Sascha

    2017-01-01

    The developmental trajectory of the use of morphemes is still unclear. We investigated the emergence of morphological effects on visual word recognition in German in a large sample across the complete course of reading acquisition in elementary school. To this end, we analyzed lexical decision data on a total of 1,152 words and pseudowords from a…

  11. The Development of Long-Term Lexical Representations through Hebb Repetition Learning

    ERIC Educational Resources Information Center

    Szmalec, Arnaud; Page, Mike P. A.; Duyck, Wouter

    2012-01-01

    This study clarifies the involvement of short- and long-term memory in novel word-form learning, using the Hebb repetition paradigm. In Experiment 1, participants recalled sequences of visually presented syllables (e.g., "la"-"va"-"bu"-"sa"-"fa"-"ra"-"re"-"si"-"di"), with one particular (Hebb) sequence repeated on every third trial. Crucially,…

  12. Learning To Learn: 15 Vocabulary Acquisition Activities. Tips and Hints.

    ERIC Educational Resources Information Center

    Holden, William R.

    1999-01-01

    This article describes a variety of ways learners can help themselves remember new words, choosing the ones that best suit their learning styles. It is asserted that repeated exposure to new lexical items using a variety of means is the most consistent predictor of retention. The use of verbal, visual, tactile, textual, kinesthetic, and sonic…

  13. Do Phonological Constraints on the Spoken Word Affect Visual Lexical Decision?

    ERIC Educational Resources Information Center

    Lee, Yang; Moreno, Miguel A.; Carello, Claudia; Turvey, M. T.

    2013-01-01

    Reading a word may involve the spoken language in two ways: in the conversion of letters to phonemes according to the conventions of the language's writing system and the assimilation of phonemes according to the language's constraints on speaking. If so, then words that require assimilation when uttered would require a change in the phonemes…

  14. The Cross-Script Length Effect: Further Evidence Challenging PDP Models of Reading Aloud

    ERIC Educational Resources Information Center

    Rastle, Kathleen; Havelka, Jelena; Wydell, Taeko N.; Coltheart, Max; Besner, Derek

    2009-01-01

    The interaction between length and lexical status is one of the key findings used in support of models of reading aloud that postulate a serial process in the orthography-to-phonology translation (B. S. Weekes, 1997). However, proponents of parallel models argue that this effect arises in peripheral visual or articulatory processes. The authors…

  15. Hemispheric Asymmetries in Processing L1 and L2 Idioms: Effects of Salience and Context

    ERIC Educational Resources Information Center

    Cieslicka, Anna B.; Heredia, Roberto R.

    2011-01-01

    This study investigates the contribution of the left and right hemispheres to the comprehension of bilingual figurative language and the joint effects of salience and context on the differential cerebral involvement in idiom processing. The divided visual field and the lexical decision priming paradigms were employed to examine the activation of…

  16. The Effects of Semantic Transparency and Base Frequency on the Recognition of English Complex Words

    ERIC Educational Resources Information Center

    Xu, Joe; Taft, Marcus

    2015-01-01

    A visual lexical decision task was used to examine the interaction between base frequency (i.e., the cumulative frequencies of morphologically related forms) and semantic transparency for a list of derived words. Linear mixed effects models revealed that high base frequency facilitates the recognition of the complex word (i.e., a "base…

  17. Concreteness in Word Processing: ERP and Behavioral Effects in a Lexical Decision Task

    ERIC Educational Resources Information Center

    Barber, Horacio A.; Otten, Leun J.; Kousta, Stavroula-Thaleia; Vigliocco, Gabriella

    2013-01-01

    Relative to abstract words, concrete words typically elicit faster response times and larger N400 and N700 event-related potential (ERP) brain responses. These effects have been interpreted as reflecting the denser links to associated semantic information of concrete words and their recruitment of visual imagery processes. Here, we examined…

  18. Early Cerebral Constraints on Reading Skills in School-Age Children: An MRI Study

    ERIC Educational Resources Information Center

    Borst, G.; Cachia, A.; Tissier, C.; Ahr, E.; Simon, G.; Houdé, O.

    2016-01-01

    Reading relies on a left-lateralized network of brain areas that include the pre-lexical processing regions of the ventral stream. Specifically, a region in the left lateral occipitotemporal sulcus (OTS) is consistently more activated for visual presentations of words than for other categories of stimuli. This region undergoes dramatic changes at…

  19. Lexical Competition during Second-Language Listening: Sentence Context, but Not Proficiency, Constrains Interference from the Native Lexicon

    ERIC Educational Resources Information Center

    Chambers, Craig G.; Cooke, Hilary

    2009-01-01

    A spoken language eye-tracking methodology was used to evaluate the effects of sentence context and proficiency on parallel language activation during spoken language comprehension. Nonnative speakers with varying proficiency levels viewed visual displays while listening to French sentences (e.g., "Marie va decrire la poule" [Marie will…

  20. The dynamics of categorization: Unraveling rapid categorization.

    PubMed

    Mack, Michael L; Palmeri, Thomas J

    2015-06-01

    We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30 ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of categorization, yet no previous study has investigated them together. We systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. However, these advantages were modulated by category trial context. With randomized target categories, the superordinate advantage was eliminated; and with only four repetitions of superordinate categorization within an otherwise randomized context, the basic-level advantage was eliminated. Contrary to theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  1. The Dynamics of Categorization: Unraveling Rapid Categorization

    PubMed Central

    Mack, Michael L.; Palmeri, Thomas J.

    2015-01-01

    We explore a puzzle of visual object categorization: Under normal viewing conditions, you spot something as a dog fastest, but at a glance, you spot it faster as an animal. During speeded category verification, a classic basic-level advantage is commonly observed (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1976), with categorization as a dog faster than as an animal (superordinate) or Golden Retriever (subordinate). A different story emerges during ultra-rapid categorization with limited exposure duration (<30ms), with superordinate categorization faster than basic or subordinate categorization (Thorpe, Fize, & Marlot, 1996). These two widely cited findings paint contrary theoretical pictures about the time course of object categorization, yet no study has previously investigated them together. Over five experiments, we systematically examined two experimental factors that could explain the qualitative difference in categorization across the two paradigms: exposure duration and category trial context. Mapping out the time course of object categorization by manipulating exposure duration and the timing of a post-stimulus mask revealed that brief exposure durations favor superordinate-level categorization, but with more time a basic-level advantage emerges. But this superordinate advantage was modulated significantly by target category trial context. With randomized target categories, the superordinate advantage was eliminated; and with “blocks” of only four repetitions of superordinate categorization within an otherwise randomized context, the advantage for the basic-level was eliminated. Contrary to some theoretical accounts that dictate a fixed priority for certain levels of abstraction in visual processing and access to semantic knowledge, the dynamics of object categorization are flexible, depending jointly on the level of abstraction, time for perceptual encoding, and category context. PMID:25938178

  2. False memories and lexical decision: even twelve primes do not cause long-term semantic priming.

    PubMed

    Zeelenberg, René; Pecher, Diane

    2002-03-01

    Semantic priming effects are usually obtained only if the prime is presented shortly before the target stimulus. Recent evidence obtained with the so-called false memory paradigm suggests, however, that in both explicit and implicit memory tasks semantic relations between words can result in long-lasting effects when multiple 'primes' are presented. The aim of the present study was to investigate whether these effects would generalize to lexical decision. In four experiments we showed that even as many as 12 primes do not cause long-term semantic priming. In all experiments, however, a repetition priming effect was obtained. The present results are consistent with a number of other results showing that semantic information plays a minimal role in long-term priming in visual word recognition.

  3. Immediate effects of form-class constraints on spoken word recognition

    PubMed Central

    Magnuson, James S.; Tanenhaus, Michael K.; Aslin, Richard N.

    2008-01-01

    In many domains of cognitive processing there is strong support for bottom-up priority and delayed top-down (contextual) integration. We ask whether this applies to supra-lexical context that could potentially constrain lexical access. Previous findings of early context integration in word recognition have typically used constraints that can be linked to pair-wise conceptual relations between words. Using an artificial lexicon, we found immediate integration of syntactic expectations based on pragmatic constraints linked to syntactic categories rather than words: phonologically similar “nouns” and “adjectives” did not compete when a combination of syntactic and visual information strongly predicted form class. These results suggest that predictive context is integrated continuously, and that previous findings supporting delayed context integration stem from weak contexts rather than delayed integration. PMID:18675408

  4. Lexicality, morphological structure, and semantic transparency in the processing of German ver-verbs: The complementarity of on-line and off-line evidence.

    PubMed

    Schirmeier, Matthias K; Derwing, Bruce L; Libben, Gary

    2004-01-01

    Two types of experiments investigate the visual on-line and off-line processing of German ver-verbs (e.g., verbittern 'to embitter'). In Experiments 1 and 2 (morphological priming), latency patterns revealed the existence of facilitation effects for the morphological conditions (BITTER-VERBITTERN and BITTERN-VERBITTERN) as compared to the neutral conditions (SAUBER-VERBITTERN and SAUBERN-VERBITTERN). In Experiments 3 and 4 (rating tasks) participants had to judge whether the target (VERBITTERN) "comes from," "contains a form of," or "contains the meaning of" the root (BITTER) or the root+en substring (BITTERN). Taken together, these studies revealed the combined influence of the three factors of lexicality (real word status), morphological structure, and semantic transparency.

  5. The effect of two different visual presentation modalities on the narratives of mainstream grade 3 children.

    PubMed

    Klop, D; Engelbrecht, L

    2013-12-01

    This study investigated whether a dynamic visual presentation method (a soundless animated video presentation) would elicit better narratives than a static visual presentation method (a wordless picture book). Twenty mainstream grade 3 children were randomly assigned to two groups and assessed with one of the visual presentation methods. Narrative performance was measured in terms of micro- and macrostructure variables. Microstructure variables included productivity (total number of words, total number of T-units), syntactic complexity (mean length of T-unit) and lexical diversity measures (number of different words). Macrostructure variables included episodic structure in terms of goal-attempt-outcome (GAO) sequences. Both visual presentation modalities elicited narratives of similar quantity and quality in terms of the micro- and macrostructure variables that were investigated. Animation of picture stimuli did not elicit better narratives than static picture stimuli.

  6. Rapid Categorization of Human and Ape Faces in 9-Month-Old Infants Revealed by Fast Periodic Visual Stimulation.

    PubMed

    Peykarjou, Stefanie; Hoehl, Stefanie; Pauen, Sabina; Rossion, Bruno

    2017-10-02

    This study investigates categorization of human and ape faces in 9-month-olds using a Fast Periodic Visual Stimulation (FPVS) paradigm while measuring EEG. Categorization responses are elicited only if infants discriminate between different categories and generalize across exemplars within each category. In study 1, human or ape faces were presented as standard and deviant stimuli in upright and inverted trials. Upright ape faces presented among humans elicited strong categorization responses, whereas responses for upright human faces and for inverted ape faces were smaller. Deviant inverted human faces did not elicit categorization. Data were best explained by a model with main effects of species and orientation. However, variance of low-level image characteristics was higher for the ape than the human category. Variance was matched to replicate this finding in an independent sample (study 2). Both human and ape faces elicited categorization in upright and inverted conditions, but upright ape faces elicited the strongest responses. Again, data were best explained by a model of two main effects. These experiments demonstrate that 9-month-olds rapidly categorize faces, and unfamiliar faces presented among human faces elicit increased categorization responses. This likely reflects habituation for the familiar standard category, and stronger release for the unfamiliar category deviants.

  7. Taxonomic and ad hoc categorization within the two cerebral hemispheres.

    PubMed

    Shen, Yeshayahu; Aharoni, Bat-El; Mashal, Nira

    2015-01-01

    A typicality effect refers to categorization which is performed more quickly or more accurately for typical than for atypical members of a given category. Previous studies reported a typicality effect for category members presented in the left visual field/right hemisphere (RH), suggesting that the RH applies a similarity-based categorization strategy. However, findings regarding the typicality effect within the left hemisphere (LH) are less conclusive. The current study tested the pattern of typicality effects within each hemisphere for both taxonomic and ad hoc categories, using words presented to the left or right visual fields. Experiment 1 tested typical and atypical members of taxonomic categories as well as non-members, and Experiment 2 tested typical and atypical members of ad hoc categories as well as non-members. The results revealed a typicality effect in both hemispheres and in both types of categories. Furthermore, the RH categorized atypical stimuli more accurately than did the LH. Our findings suggest that both hemispheres rely on a similarity-based categorization strategy, but the coarse semantic coding of the RH seems to facilitate the categorization of atypical members.

  8. A lexical semantic hub for heteromodal naming in middle fusiform gyrus.

    PubMed

    Forseth, Kiefer James; Kadipasaoglu, Cihan Mehmet; Conner, Christopher Richard; Hickok, Gregory; Knight, Robert Thomas; Tandon, Nitin

    2018-07-01

    Semantic memory underpins our understanding of objects, people, places, and ideas. Anomia, a disruption of semantic memory access, is the most common residual language disturbance and is seen in dementia and following injury to temporal cortex. While such anomia has been well characterized by lesion symptom mapping studies, its pathophysiology is not well understood. We hypothesize that inputs to the semantic memory system engage a specific heteromodal network hub that integrates lexical retrieval with the appropriate semantic content. Such a network hub has been proposed by others, but has thus far eluded precise spatiotemporal delineation. This limitation in our understanding of semantic memory has impeded progress in the treatment of anomia. We evaluated the cortical structure and dynamics of the lexical semantic network in driving speech production in a large cohort of patients with epilepsy using electrocorticography (n = 64), functional MRI (n = 36), and direct cortical stimulation (n = 30) during two generative language processes that rely on semantic knowledge: visual picture naming and auditory naming to definition. Each task also featured a non-semantic control condition: scrambled pictures and reversed speech, respectively. These large-scale data of the left, language-dominant hemisphere uniquely enable convergent, high-resolution analyses of neural mechanisms characterized by rapid, transient dynamics with strong interactions between distributed cortical substrates. We observed three stages of activity during both visual picture naming and auditory naming to definition that were serially organized: sensory processing, lexical semantic processing, and articulation. Critically, the second stage was absent in both the visual and auditory control conditions. Group activity maps from both electrocorticography and functional MRI identified heteromodal responses in middle fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus; furthermore, the spectrotemporal profiles of these three regions revealed coincident activity preceding articulation. Only in the middle fusiform gyrus did direct cortical stimulation disrupt both naming tasks while still preserving the ability to repeat sentences. These convergent data strongly support a model in which a distinct neuroanatomical substrate in middle fusiform gyrus provides access to object semantic information. This under-appreciated locus of semantic processing is at risk in resections for temporal lobe epilepsy as well as in trauma and strokes that affect the inferior temporal cortex; it may explain the range of anomic states seen in these conditions. Further characterization of brain network behaviour engaging this region in both healthy and diseased states will expand our understanding of semantic memory and further the development of therapies directed at anomia.

  9. Electrostimulation mapping of comprehension of auditory and visual words.

    PubMed

    Roux, Franck-Emmanuel; Miskin, Krasimir; Durand, Jean-Baptiste; Sacko, Oumar; Réhault, Emilie; Tanova, Rositsa; Démonet, Jean-François

    2015-10-01

    In order to spare functional areas during the removal of brain tumours, electrical stimulation mapping was used in 90 patients (77 in the left hemisphere and 13 in the right; 2754 cortical sites tested). Language functions were studied with a special focus on comprehension of auditory and visual words and the semantic system. In addition to naming, patients were asked to perform pointing tasks from auditory and visual stimuli (using sets of 4 different images controlled for familiarity), and also auditory object (sound recognition) and Token test tasks. Ninety-two auditory comprehension interference sites were observed. We found that the process of auditory comprehension involved a few, fine-grained, sub-centimetre cortical territories. Early stages of speech comprehension seem to relate to two posterior regions in the left superior temporal gyrus. Downstream lexical-semantic speech processing and sound analysis involved 2 pathways, along the anterior part of the left superior temporal gyrus, and posteriorly around the supramarginal and middle temporal gyri. Electrostimulation experimentally dissociated perceptual consciousness attached to speech comprehension. The initial word discrimination process can be considered as an "automatic" stage, the attention feedback not being impaired by stimulation as would be the case at the lexical-semantic stage. Multimodal organization of the superior temporal gyrus was also detected since some neurones could be involved in comprehension of visual material and naming. These findings demonstrate a fine-grained, sub-centimetre cortical representation of speech comprehension processing mainly in the left superior temporal gyrus and are in line with those described in dual stream models of language comprehension processing. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. A task-dependent causal role for low-level visual processes in spoken word comprehension.

    PubMed

    Ostarek, Markus; Huettig, Falk

    2017-08-01

    It is well established that the comprehension of spoken words referring to object concepts relies on high-level visual areas in the ventral stream that build increasingly abstract representations. It is much less clear whether basic low-level visual representations are also involved. Here we asked in what task situations low-level visual representations contribute functionally to concrete word comprehension using an interference paradigm. We interfered with basic visual processing while participants performed a concreteness task (Experiment 1), a lexical-decision task (Experiment 2), and a word class judgment task (Experiment 3). We found that visual noise interfered more with concrete versus abstract word processing, but only when the task required visual information to be accessed. This suggests that basic visual processes can be causally involved in language comprehension, but that their recruitment is not automatic and rather depends on the type of information that is required in a given task situation. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. An association between auditory-visual synchrony processing and reading comprehension: Behavioral and electrophysiological evidence

    PubMed Central

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2016-01-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension. PMID:28129060

  12. An Association between Auditory-Visual Synchrony Processing and Reading Comprehension: Behavioral and Electrophysiological Evidence.

    PubMed

    Mossbridge, Julia; Zweig, Jacob; Grabowecky, Marcia; Suzuki, Satoru

    2017-03-01

    The perceptual system integrates synchronized auditory-visual signals in part to promote individuation of objects in cluttered environments. The processing of auditory-visual synchrony may more generally contribute to cognition by synchronizing internally generated multimodal signals. Reading is a prime example because the ability to synchronize internal phonological and/or lexical processing with visual orthographic processing may facilitate encoding of words and meanings. Consistent with this possibility, developmental and clinical research has suggested a link between reading performance and the ability to compare visual spatial/temporal patterns with auditory temporal patterns. Here, we provide converging behavioral and electrophysiological evidence suggesting that greater behavioral ability to judge auditory-visual synchrony (Experiment 1) and greater sensitivity of an electrophysiological marker of auditory-visual synchrony processing (Experiment 2) both predict superior reading comprehension performance, accounting for 16% and 25% of the variance, respectively. These results support the idea that the mechanisms that detect auditory-visual synchrony contribute to reading comprehension.

  13. Relationships between Categorical Perception of Phonemes, Phoneme Awareness, and Visual Attention Span in Developmental Dyslexia.

    PubMed

    Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane

    2016-01-01

    We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants' VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition.

  14. Relationships between Categorical Perception of Phonemes, Phoneme Awareness, and Visual Attention Span in Developmental Dyslexia

    PubMed Central

    Zoubrinetzky, Rachel; Collet, Gregory; Serniclaes, Willy; Nguyen-Morel, Marie-Ange; Valdois, Sylviane

    2016-01-01

    We tested the hypothesis that the categorical perception deficit of speech sounds in developmental dyslexia is related to phoneme awareness skills, whereas a visual attention (VA) span deficit constitutes an independent deficit. Phoneme awareness tasks, VA span tasks and categorical perception tasks of phoneme identification and discrimination using a d/t voicing continuum were administered to 63 dyslexic children and 63 control children matched on chronological age. Results showed significant differences in categorical perception between the dyslexic and control children. Significant correlations were found between categorical perception skills, phoneme awareness and reading. Although VA span correlated with reading, no significant correlations were found between either categorical perception or phoneme awareness and VA span. Mediation analyses performed on the whole dyslexic sample suggested that the effect of categorical perception on reading might be mediated by phoneme awareness. This relationship was independent of the participants’ VA span abilities. Two groups of dyslexic children with a single phoneme awareness or a single VA span deficit were then identified. The phonologically impaired group showed lower categorical perception skills than the control group but categorical perception was similar in the VA span impaired dyslexic and control children. The overall findings suggest that the link between categorical perception, phoneme awareness and reading is independent from VA span skills. These findings provide new insights on the heterogeneity of developmental dyslexia. They suggest that phonological processes and VA span independently affect reading acquisition. PMID:26950210

  15. Category learning increases discriminability of relevant object dimensions in visual cortex.

    PubMed

    Folstein, Jonathan R; Palmeri, Thomas J; Gauthier, Isabel

    2013-04-01

    Learning to categorize objects can transform how they are perceived, causing relevant perceptual dimensions predictive of object category to become enhanced. For example, an expert mycologist might become attuned to species-specific patterns of spacing between mushroom gills but learn to ignore cap textures attributable to varying environmental conditions. These selective changes in perception can persist beyond the act of categorizing objects and influence our ability to discriminate between them. Using functional magnetic resonance imaging adaptation, we demonstrate that such category-specific perceptual enhancements are associated with changes in the neural discriminability of object representations in visual cortex. Regions within the anterior fusiform gyrus became more sensitive to small variations in shape that were relevant during prior category learning. In addition, extrastriate occipital areas showed heightened sensitivity to small variations in shape that spanned the category boundary. Visual representations in cortex, just like our perception, are sensitive to an object's history of categorization.

  16. Discrimination of Lexical Tones in the First Year of Life

    ERIC Educational Resources Information Center

    Chen, Ao; Kager, René

    2016-01-01

    In the current study, we examined the developmental course of the perception of non-native tonal contrast. We tested 4, 6 and 12-month-old Dutch infants on their discrimination of Chinese low-rising tone and low-dipping tone using the visual fixation paradigm. The infants were tested in two conditions that differed in terms of degree of…

  17. Testing the Multiple in the Multiple Read-Out Model of Visual Word Recognition

    ERIC Educational Resources Information Center

    De Moor, Wendy; Verguts, Tom; Brysbaert, Marc

    2005-01-01

    This study provided a test of the multiple criteria concept used for lexical decision, as implemented in J. Grainger and A. M. Jacobs's (1996) multiple read-out model. This account predicts more inhibition (or less facilitation) from a masked neighbor when accuracy is stressed more but more facilitation (or less inhibition) when the speed of…

  18. Powerpoint as a Potential Tool to Learners' Vocabulary Retention: Empirical Evidences from a Vietnamese Secondary Education Setting

    ERIC Educational Resources Information Center

    Nam, Ta Thanh; Trinh, Lap Q.

    2012-01-01

    In Vietnamese secondary education, translation and visuals are traditionally used as major techniques in teaching new English lexical items. Responding to the Vietnamese government policy issued in 2008 on using IT for a quality education, the application of PowerPoint has been considered the most prevalent type of technology used in the…

  19. Word or Word-Like? Dissociating Orthographic Typicality from Lexicality in the Left Occipito-Temporal Cortex

    ERIC Educational Resources Information Center

    Woollams, Anna M.; Silani, Giorgia; Okada, Kayoko; Patterson, Karalyn; Price, Cathy J.

    2011-01-01

    Prior lesion and functional imaging studies have highlighted the importance of the left ventral occipito-temporal (LvOT) cortex for visual word recognition. Within this area, there is a posterior-anterior hierarchy of subregions that are specialized for different stages of orthographic processing. The aim of the present fMRI study was to…

  20. Processing of Gender and Number Agreement in Russian as a Second Language: The Devil Is in the Details

    ERIC Educational Resources Information Center

    Romanova, Natalia; Gor, Kira

    2017-01-01

    The study investigated the processing of Russian gender and number agreement by native (n = 36) and nonnative (n = 36) participants using a visual lexical decision task with priming. The design included a baseline condition that helped dissociate the underlying components of priming (facilitation and inhibition). The results showed no differences…

  1. Additive Effects of Word Frequency and Stimulus Quality: The Influence of Trial History and Data Transformations

    ERIC Educational Resources Information Center

    Balota, David A.; Aschenbrenner, Andrew J.; Yap, Melvin J.

    2013-01-01

    A counterintuitive and theoretically important pattern of results in the visual word recognition literature is that both word frequency and stimulus quality produce large but additive effects in lexical decision performance. The additive nature of these effects has recently been called into question by Masson and Kliegl (in press), who used linear…

  2. The Lexical Status of the Root in Processing Morphologically Complex Words in Arabic

    ERIC Educational Resources Information Center

    Shalhoub-Awwad, Yasmin; Leikin, Mark

    2016-01-01

    This study investigated the effects of the Arabic root in the visual word recognition process among young readers in order to explore its role in reading acquisition and its development within the structure of the Arabic mental lexicon. We examined cross-modal priming of words that were derived from the same root of the target…

  3. The Mental Representation of Verb-Noun Compounds in Italian: Evidence from a Multiple Single-Case Study in Aphasia

    ERIC Educational Resources Information Center

    Mondini, Sara; Luzzatti, Claudio; Zonca, Giusy; Pistarini, Caterina; Semenza, Carlo

    2004-01-01

    This study seeks information on the mental representation of Verb-Noun (VN) nominal compounds through neuropsychological methods. The lexical retrieval of compound nouns is tested in 30 aphasic patients using a visual confrontation naming task. The target names are VN compounds, Noun-Noun (NN) compounds, and long morphologically simple nouns…

  4. A stable biologically motivated learning mechanism for visual feature extraction to handle facial categorization.

    PubMed

    Rajaei, Karim; Khaligh-Razavi, Seyed-Mahdi; Ghodrati, Masoud; Ebrahimpour, Reza; Shiri Ahmad Abadi, Mohammad Ebrahim

    2012-01-01

    The brain mechanism of extracting visual features for recognizing various objects has consistently been a controversial issue in computational models of object recognition. To extract visual features, we introduce a new, biologically motivated model for facial categorization, which is an extension of the Hubel and Wiesel simple-to-complex cell hierarchy. To address the synaptic stability versus plasticity dilemma, we apply the Adaptive Resonance Theory (ART) for extracting informative intermediate level visual features during the learning process, which also makes this model stable against the destruction of previously learned information while learning new information. Such a mechanism has been suggested to be embedded within known laminar microcircuits of the cerebral cortex. To reveal the strength of the proposed visual feature learning mechanism, we show that when we use this mechanism in the training process of a well-known biologically motivated object recognition model (the HMAX model), it performs better than the HMAX model in face/non-face classification tasks. Furthermore, we demonstrate that our proposed mechanism is capable of following similar trends in performance as humans in a psychophysical experiment using a face versus non-face rapid categorization task.
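
    To make the simple-to-complex idea concrete, the toy sketch below implements only the generic S1/C1 stages that HMAX-style models build on: oriented Gabor filtering ("simple cells") followed by local max pooling ("complex cells"). Filter sizes, orientations and the pooling grid are illustrative assumptions; this is not the authors' model, and it omits the ART-based feature-learning stage that is the paper's contribution.

      # Minimal sketch of the simple-to-complex (S1 -> C1) idea in HMAX-style models:
      # oriented Gabor filtering followed by local max pooling. Parameter values are
      # illustrative assumptions only.
      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=11, wavelength=6.0, sigma=3.0, theta=0.0):
          """Real (cosine-phase) Gabor filter at orientation theta (radians)."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          return np.exp(-(x_t**2 + y_t**2) / (2 * sigma**2)) * np.cos(2 * np.pi * x_t / wavelength)

      def s1_responses(image, orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          """'Simple cell' maps: rectified responses of oriented Gabor filters."""
          return [np.abs(convolve2d(image, gabor_kernel(theta=t), mode="same")) for t in orientations]

      def c1_pool(response, cell=8):
          """'Complex cell' map: max over non-overlapping cell x cell neighbourhoods."""
          h, w = (response.shape[0] // cell) * cell, (response.shape[1] // cell) * cell
          blocks = response[:h, :w].reshape(h // cell, cell, w // cell, cell)
          return blocks.max(axis=(1, 3))

      if __name__ == "__main__":
          image = np.random.rand(64, 64)                 # placeholder input image
          c1_features = [c1_pool(r) for r in s1_responses(image)]
          print([f.shape for f in c1_features])          # four 8x8 orientation maps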

  5. The processing of consonants and vowels during letter identity and letter position assignment in visual-word recognition: an ERP study.

    PubMed

    Vergara-Martínez, Marta; Perea, Manuel; Marín, Alejandro; Carreiras, Manuel

    2011-09-01

    Recent research suggests that there is a processing distinction between consonants and vowels in visual-word recognition. Here we conjointly examine the time course of consonants and vowels in processes of letter identity and letter position assignment. Event-related potentials (ERPs) were recorded while participants read words and pseudowords in a lexical decision task. The stimuli were displayed under different conditions in a masked priming paradigm with a 50-ms SOA: (i) identity/baseline condition (e.g., chocolate-CHOCOLATE); (ii) vowels-delayed condition (e.g., choc_l_te-CHOCOLATE); (iii) consonants-delayed condition (e.g., cho_o_ate-CHOCOLATE); (iv) consonants-transposed condition (e.g., cholocate-CHOCOLATE); (v) vowels-transposed condition (e.g., chocalote-CHOCOLATE); and (vi) unrelated condition (e.g., editorial-CHOCOLATE). Results showed earlier ERP effects and longer reaction times for the delayed-letter compared to the transposed-letter conditions. Furthermore, at early stages of processing, consonants may play a greater role during letter identity processing. Differences between vowels and consonants regarding letter position assignment are discussed in terms of a later phonological level involved in lexical retrieval. Copyright © 2010 Elsevier Inc. All rights reserved.

  6. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    PubMed

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  7. A conflict-based model of color categorical perception: evidence from a priming study.

    PubMed

    Hu, Zhonghua; Hanley, J Richard; Zhang, Ruiling; Liu, Qiang; Roberson, Debi

    2014-10-01

    Categorical perception (CP) of color manifests as faster or more accurate discrimination of two shades of color that straddle a category boundary (e.g., one blue and one green) than of two shades from within the same category (e.g., two different shades of green), even when the differences between the pairs of colors are equated according to some objective metric. The results of two experiments provide new evidence for a conflict-based account of this effect, in which CP is caused by competition between visual and verbal/categorical codes on within-category trials. According to this view, conflict arises because the verbal code indicates that the two colors are the same, whereas the visual code indicates that they are different. In Experiment 1, two shades from the same color category were discriminated significantly faster when the previous trial also comprised a pair of within-category colors than when the previous trial comprised a pair from two different color categories. Under the former circumstances, the CP effect disappeared. According to the conflict-based model, response conflict between visual and categorical codes during discrimination of within-category pairs produced an adjustment of cognitive control that reduced the weight given to the categorical code relative to the visual code on the subsequent trial. Consequently, responses on within-category trials were facilitated, and CP effects were reduced. The effectiveness of this conflict-based account was evaluated in comparison with an alternative view that CP reflects temporary warping of perceptual space at the boundaries between color categories.

  8. Electrophysiological evidence for the morpheme-based combinatoric processing of English compounds

    PubMed Central

    Fiorentino, Robert; Naito-Billen, Yuka; Bost, Jamie; Fund-Reznicek, Ella

    2014-01-01

    The extent to which the processing of compounds (e.g., “catfish”) makes recourse to morphological-level representations remains a matter of debate. Moreover, positing a morpheme-level route to complex word recognition entails not only access to morphological constituents, but also combinatoric processes operating on the constituent representations; however, the neurophysiological mechanisms subserving decomposition, and in particular morpheme combination, have yet to be fully elucidated. The current study presents electrophysiological evidence for the morpheme-based processing of both lexicalized (e.g., “teacup”) and novel (e.g., “tombnote”) visually-presented English compounds; these brain responses appear prior to and are dissociable from the eventual overt lexical decision response. The electrophysiological results reveal increased negativities for conditions with compound structure, including effects shared by lexicalized and novel compounds, as well as effects unique to each compound type, which may be related to aspects of morpheme combination. These findings support models positing across-the-board morphological decomposition, counter to models proposing that putatively complex words are primarily or solely processed as undecomposed representations, and motivate further electrophysiological research toward a more precise characterization of the nature and neurophysiological instantiation of complex word recognition. PMID:24279696

  9. Orthographic and Phonological Neighborhood Databases across Multiple Languages.

    PubMed

    Marian, Viorica

    2017-01-01

    The increased globalization of science and technology and the growing number of bilinguals and multilinguals in the world have made research with multiple languages a mainstay for scholars who study human function and especially those who focus on language, cognition, and the brain. Such research can benefit from large-scale databases and online resources that describe and measure lexical, phonological, orthographic, and semantic information. The present paper discusses currently-available resources and underscores the need for tools that enable measurements both within and across multiple languages. A general review of language databases is followed by a targeted introduction to databases of orthographic and phonological neighborhoods. A specific focus on CLEARPOND illustrates how databases can be used to assess and compare neighborhood information across languages, to develop research materials, and to provide insight into broad questions about language. As an example of how using large-scale databases can answer questions about language, a closer look at neighborhood effects on lexical access reveals that not only orthographic, but also phonological neighborhoods can influence visual lexical access both within and across languages. We conclude that capitalizing upon large-scale linguistic databases can advance, refine, and accelerate scientific discoveries about the human linguistic capacity.
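
    As a concrete illustration of the kind of neighborhood measure such databases report, the sketch below counts orthographic neighbors in the classic Coltheart sense (words of the same length that differ by exactly one letter substitution). It is a toy computation over a hypothetical word list, not CLEARPOND's implementation, which additionally covers phonological and cross-language neighborhoods.

      # Toy illustration of an orthographic-neighborhood count (Coltheart's N): the
      # number of same-length words differing from the target by exactly one letter
      # substitution. A minimal sketch over a hypothetical word list, not CLEARPOND.

      def one_substitution_apart(a, b):
          """True if a and b have equal length and differ in exactly one position."""
          return len(a) == len(b) and sum(x != y for x, y in zip(a, b)) == 1

      def orthographic_neighbors(target, lexicon):
          """All words in the lexicon that are one substitution away from the target."""
          return [w for w in lexicon if one_substitution_apart(target, w)]

      if __name__ == "__main__":
          toy_lexicon = ["cat", "cot", "coat", "bat", "can", "dog", "cut"]
          print(orthographic_neighbors("cat", toy_lexicon))       # ['cot', 'bat', 'can', 'cut']
          print(len(orthographic_neighbors("cat", toy_lexicon)))  # Coltheart's N = 4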

  10. There is no clam with coats in the calm coast: delimiting the transposed-letter priming effect.

    PubMed

    Duñabeitia, Jon Andoni; Perea, Manuel; Carreiras, Manuel

    2009-10-01

    In this article, we explore the transposed-letter priming effect (e.g., jugde-JUDGE vs. jupte-JUDGE), a phenomenon that taps into some key issues on how the brain encodes letter positions and has favoured the creation of new input coding schemes. However, almost all the empirical evidence from transposed-letter priming experiments comes from nonword primes (e.g., jugde-JUDGE). Indeed, previous evidence when using word-word pairs (e.g., causal-CASUAL) is not conclusive. Here, we conducted five masked priming lexical decision experiments that examined the relationship between pairs of real words that differed only in the transposition of two of their letters (e.g., CASUAL vs. CAUSAL). Results showed that, unlike transposed-letter nonwords, transposed-letter words do not seem to affect the identification time of their transposed-letter mates. Thus, prime lexicality is a key factor that modulates the magnitude of transposed-letter priming effects. These results are interpreted under the assumption of the existence of lateral inhibition processes occurring within the lexical level, which cancels out any orthographic facilitation due to the overlapping letters. We examine the implications of these findings for models of visual-word recognition.

  11. Lexical enhancement during prime-target integration: ERP evidence from matched-case identity priming.

    PubMed

    Vergara-Martínez, Marta; Gómez, Pablo; Jiménez, María; Perea, Manuel

    2015-06-01

    A number of experiments have revealed that matched-case identity PRIME-TARGET pairs are responded to faster than mismatched-case identity prime-TARGET pairs for pseudowords (e.g., JUDPE-JUDPE < judpe-JUDPE), but not for words (JUDGE-JUDGE = judge-JUDGE). These findings suggest that prime-target integration processes are enhanced when the stimuli tap onto lexical representations, overriding physical differences between the stimuli (e.g., case). To track the time course of this phenomenon, we conducted an event-related potential (ERP) masked-priming lexical decision experiment that manipulated matched versus mismatched case identity in words and pseudowords. The behavioral results replicated previous research. The ERP waves revealed that matched-case identity-priming effects were found at a very early time epoch (N/P150 effects) for words and pseudowords. Importantly, around 200 ms after target onset (N250), these differences disappeared for words but not for pseudowords. These findings suggest that different-case word forms (lower- and uppercase) tap into the same abstract representation, leading to prime-target integration very early in processing. In contrast, different-case pseudoword forms are processed as two different representations. This word-pseudoword dissociation has important implications for neural accounts of visual-word recognition.

  12. The role of tone and segmental information in visual-word recognition in Thai.

    PubMed

    Winskel, Heather; Ratitamkul, Theeraporn; Charoensit, Akira

    2017-07-01

    Tone languages represent a large proportion of the spoken languages of the world and yet lexical tone is understudied. Thai offers a unique opportunity to investigate the role of lexical tone processing during visual-word recognition, as tone is explicitly expressed in its script. We used colour words and their orthographic neighbours as stimuli to investigate facilitation (Experiment 1) and interference (Experiment 2) Stroop effects. Five experimental conditions were created: (a) the colour word (e.g., ขาว /kʰã:w/ [white]), (b) tone different word (e.g., ข่าว /kʰà:w/ [news]), (c) initial consonant phonologically same word (e.g., คาว /kʰa:w/ [fishy]), where the initial consonant of the word was phonologically the same but orthographically different, (d) initial consonant different, tone same word (e.g., หาว /hã:w/ [yawn]), where the initial consonant was orthographically different but the tone of the word was the same, and (e) initial consonant different, tone different word (e.g., กาว /ka:w/ [glue]), where the initial consonant was orthographically different, and the tone was different. In order to examine whether tone information per se had a facilitative effect, we also included a colour congruent word condition where the segmental (S) information was different but the tone (T) matched the colour word (S-T+) in Experiment 2. Facilitation/interference effects were found for all five conditions when compared with a neutral control word. Results of the critical comparisons revealed that tone information comes into play at a later stage in lexical processing, and orthographic information contributes more than phonological information.

  13. Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition

    PubMed Central

    Popp, Margot; Trumpp, Natalie M.; Kiefer, Markus

    2016-01-01

    Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while work on feature-specific conceptual category differences in verbs has mainly focused on body part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity-reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for a differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance. PMID:28018201

  14. Feature-Specific Event-Related Potential Effects to Action- and Sound-Related Verbs during Visual Word Recognition.

    PubMed

    Popp, Margot; Trumpp, Natalie M; Kiefer, Markus

    2016-01-01

    Grounded cognition theories suggest that conceptual representations essentially depend on modality-specific sensory and motor systems. Feature-specific brain activation across different feature types such as action or audition has been intensively investigated in nouns, while work on feature-specific conceptual category differences in verbs has mainly focused on body part-specific effects. The present work aimed at assessing whether feature-specific event-related potential (ERP) differences between action and sound concepts, as previously observed in nouns, can also be found within the word class of verbs. In Experiment 1, participants were visually presented with carefully matched sound and action verbs within a lexical decision task, which provides implicit access to word meaning and minimizes strategic access to semantic word features. Experiment 2 tested whether pre-activating the verb concept in a context phase, in which the verb is presented with a related context noun, modulates subsequent feature-specific action vs. sound verb processing within the lexical decision task. In Experiment 1, ERP analyses revealed a differential ERP polarity pattern for action and sound verbs at parietal and central electrodes similar to previous results in nouns. Pre-activation of the meaning of verbs in the preceding context phase in Experiment 2 resulted in a polarity-reversal of feature-specific ERP effects in the lexical decision task compared with Experiment 1. This parallels analogous earlier findings for primed action- and sound-related nouns. In line with grounded cognition theories, our ERP study provides evidence for a differential processing of action and sound verbs similar to earlier observations for concrete nouns. Although the localizational value of ERPs must be viewed with caution, our results indicate that the meaning of verbs is linked to different neural circuits depending on conceptual feature relevance.

  15. Constraints on the Transfer of Perceptual Learning in Accented Speech

    PubMed Central

    Eisner, Frank; Melinger, Alissa; Weber, Andrea

    2013-01-01

    The perception of speech sounds can be re-tuned through a mechanism of lexically driven perceptual learning after exposure to instances of atypical speech production. This study asked whether this re-tuning is sensitive to the position of the atypical sound within the word. We investigated perceptual learning using English voiced stop consonants, which are commonly devoiced in word-final position by Dutch learners of English. After exposure to a Dutch learner’s productions of devoiced stops in word-final position (but not in any other positions), British English (BE) listeners showed evidence of perceptual learning in a subsequent cross-modal priming task, where auditory primes with devoiced final stops (e.g., “seed”, pronounced [si:tʰ]) facilitated recognition of visual targets with voiced final stops (e.g., SEED). In Experiment 1, this learning effect generalized to test pairs where the critical contrast was in word-initial position, e.g., auditory primes such as “town” facilitated recognition of visual targets like DOWN. Control listeners, who had not heard any stops by the speaker during exposure, showed no learning effects. The generalization to word-initial position did not occur when participants had also heard correctly voiced, word-initial stops during exposure (Experiment 2), and when the speaker was a native BE speaker who mimicked the word-final devoicing (Experiment 3). The readiness of the perceptual system to generalize a previously learned adjustment to other positions within the word thus appears to be modulated by distributional properties of the speech input, as well as by the perceived sociophonetic characteristics of the speaker. The results suggest that the transfer of pre-lexical perceptual adjustments that occur through lexically driven learning can be affected by a combination of acoustic, phonological, and sociophonetic factors. PMID:23554598

  16. Dimension-Based Statistical Learning of Vowels

    PubMed Central

    Liu, Ran; Holt, Lori L.

    2015-01-01

    Speech perception depends on long-term representations that reflect regularities of the native language. However, listeners rapidly adapt when speech acoustics deviate from these regularities due to talker idiosyncrasies such as foreign accents and dialects. To better understand these dual aspects of speech perception, we probe native English listeners’ baseline perceptual weighting of two acoustic dimensions (spectral quality and vowel duration) towards vowel categorization and examine how they subsequently adapt to an “artificial accent” that deviates from English norms in the correlation between the two dimensions. At baseline, listeners rely relatively more on spectral quality than vowel duration to signal vowel category, but duration nonetheless contributes. Upon encountering an “artificial accent” in which the spectral-duration correlation is perturbed relative to English language norms, listeners rapidly down-weight reliance on duration. Listeners exhibit this type of short-term statistical learning even in the context of nonwords, confirming that lexical information is not necessary to this form of adaptive plasticity in speech perception. Moreover, learning generalizes to both novel lexical contexts and acoustically-distinct altered voices. These findings are discussed in the context of a mechanistic proposal for how supervised learning may contribute to this type of adaptive plasticity in speech perception. PMID:26280268

  17. Lateralized Cognition: Asymmetrical and Complementary Strategies of Pigeons during Discrimination of the "Human Concept"

    ERIC Educational Resources Information Center

    Yamazaki, Y.; Aust, U.; Huber, L.; Hausmann, M.; Gunturkun, O.

    2007-01-01

    This study was aimed at revealing which cognitive processes are lateralized in visual categorizations of "humans" by pigeons. To this end, pigeons were trained to categorize pictures of humans and then tested binocularly or monocularly (left or right eye) on the learned categorization and for transfer to novel exemplars (Experiment 1). Subsequent…

  18. Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading

    PubMed Central

    O’Sullivan, Aisling E.; Crosse, Michael J.; Di Liberto, Giovanni M.; Lalor, Edmund C.

    2017-01-01

    Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. PMID:28123363
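
    Forward models of this kind are typically fit as time-lagged linear regressions (temporal response functions) from stimulus features to each EEG channel. The sketch below is a generic, assumed implementation using a lagged design matrix and closed-form ridge regression with placeholder feature streams; it is not the authors' analysis code, and the lag range and regularisation value are arbitrary.

      # Generic sketch of a stimulus-to-EEG forward model (temporal response function):
      # stack time-lagged stimulus features into a design matrix and fit ridge
      # regression to predict one EEG channel. Features, lags and alpha are
      # illustrative placeholders, not the study's actual pipeline.
      import numpy as np

      def lagged_design(features, max_lag):
          """Stack features at lags 0..max_lag (in samples) into one design matrix."""
          n_samples, _ = features.shape
          cols = []
          for lag in range(max_lag + 1):
              shifted = np.zeros_like(features)
              shifted[lag:] = features[:n_samples - lag]
              cols.append(shifted)
          return np.hstack(cols)

      def fit_ridge(X, y, alpha=1.0):
          """Closed-form ridge regression weights."""
          n_cols = X.shape[1]
          return np.linalg.solve(X.T @ X + alpha * np.eye(n_cols), X.T @ y)

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          n = 2000
          envelope = rng.random((n, 1))             # placeholder "E" feature
          motion = rng.random((n, 1))               # placeholder "M" feature
          features = np.hstack([envelope, motion])  # combined "EM" model
          eeg = rng.standard_normal(n)              # placeholder EEG channel
          X = lagged_design(features, max_lag=25)   # e.g. 0-250 ms at 100 Hz
          w = fit_ridge(X, eeg, alpha=10.0)
          r = np.corrcoef(X @ w, eeg)[0, 1]         # prediction accuracy metric
          print("Predicted-vs-actual correlation:", round(float(r), 3))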

  19. Interval-level measurement with visual analogue scales in Internet-based research: VAS Generator.

    PubMed

    Reips, Ulf-Dietrich; Funke, Frederik

    2008-08-01

    The present article describes VAS Generator (www.vasgenerator.net), a free Web service for creating a wide range of visual analogue scales that can be used as measurement devices in Web surveys and Web experimentation, as well as for local computerized assessment. A step-by-step example for creating and implementing a visual analogue scale with visual feedback is given. VAS Generator and the scales it generates work independently of platforms and use the underlying languages HTML and JavaScript. Results from a validation study with 355 participants are reported and show that the scales generated with VAS Generator approximate an interval-scale level. In light of previous research on visual analogue versus categorical (e.g., radio button) scales in Internet-based research, we conclude that categorical scales only reach ordinal-scale level, and thus visual analogue scales are to be preferred whenever possible.

  20. Categorizing words using 'frequent frames': what cross-linguistic analyses reveal about distributional acquisition strategies.

    PubMed

    Chemla, Emmanuel; Mintz, Toben H; Bernal, Savita; Christophe, Anne

    2009-04-01

    Mintz (2003) described a distributional environment called a frame, defined as the co-occurrence of two context words with one intervening target word. Analyses of English child-directed speech showed that words that fell within any frequently occurring frame consistently belonged to the same grammatical category (e.g. noun, verb, adjective, etc.). In this paper, we first generalize this result to French, a language in which the function word system allows patterns that are potentially detrimental to a frame-based analysis procedure. Second, we show that the discontinuity of the chosen environments (i.e. the fact that target words are framed by the context words) is crucial for the mechanism to be efficient. This property might be relevant for any computational approach to grammatical categorization. Finally, we investigate a recursive application of the procedure and observe that the categorization is paradoxically worse when context elements are categories rather than actual lexical items. Item-specificity is thus also a core computational principle for this type of algorithm. Our analysis, along with results from behavioural studies (Gómez, 2002; Gómez and Maye, 2005; Mintz, 2006), provides strong support for frames as a basis for the acquisition of grammatical categories by infants. Discontinuity and item-specificity appear to be crucial features.
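
    The frame procedure itself is simple enough to sketch: collect every pair of words that co-occur with exactly one word between them, keep the frames that recur often, and treat the intervening words of each frequent frame as one candidate category. The toy implementation below makes this concrete; the corpus and frequency threshold are illustrative assumptions, not the materials or criteria used in the paper.

      # Minimal sketch of frame-based distributional categorization (after Mintz, 2003):
      # a frame is a pair of context words (A ... B) with exactly one intervening target
      # word; words sharing a frequent frame are grouped into one category. The corpus
      # and threshold here are toy values for illustration only.
      from collections import Counter, defaultdict

      def collect_frames(utterances):
          """Map each (A, B) frame to the counts of words occurring between A and B."""
          frames = defaultdict(Counter)
          for utterance in utterances:
              tokens = utterance.lower().split()
              for a, x, b in zip(tokens, tokens[1:], tokens[2:]):
                  frames[(a, b)][x] += 1
          return frames

      def frequent_frame_categories(utterances, min_count=2):
          """Keep frames whose total count reaches min_count; each yields one word category."""
          frames = collect_frames(utterances)
          return {frame: sorted(targets) for frame, targets in frames.items()
                  if sum(targets.values()) >= min_count}

      if __name__ == "__main__":
          toy_corpus = [
              "you want to eat it",
              "you want to see it",
              "you have to go now",
          ]
          for frame, words in frequent_frame_categories(toy_corpus).items():
              print(frame, "->", words)   # e.g. ('you', 'to') -> ['have', 'want']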

  1. Interpreting Chicken-Scratch: Lexical Access for Handwritten Words

    PubMed Central

    Barnhart, Anthony S.; Goldinger, Stephen D.

    2014-01-01

    Handwritten word recognition is a field of study that has largely been neglected in the psychological literature, despite its prevalence in society. Whereas studies of spoken word recognition almost exclusively employ natural, human voices as stimuli, studies of visual word recognition use synthetic typefaces, thus simplifying the process of word recognition. The current study examined the effects of handwriting on a series of lexical variables thought to influence bottom-up and top-down processing, including word frequency, regularity, bidirectional consistency, and imageability. The results suggest that the natural physical ambiguity of handwritten stimuli forces a greater reliance on top-down processes, because almost all effects were magnified, relative to conditions with computer print. These findings suggest that processes of word perception naturally adapt to handwriting, compensating for physical ambiguity by increasing top-down feedback. PMID:20695708

  2. The role of spatial attention in visual word processing

    NASA Technical Reports Server (NTRS)

    Mccann, Robert S.; Folk, Charles L.; Johnston, James C.

    1992-01-01

    Subjects made lexical decisions on a target letter string presented above or below fixation. In Experiments 1 and 2, target location was cued 100 ms in advance of target onset. Responses were faster on validly than on invalidly cued trials. In Experiment 3, the target was sometimes accompanied by irrelevant stimuli on the other side of fixation; in such cases, responses were slowed (a spatial filtering effect). Both cuing and filtering effects on response time were additive with effects of word frequency and lexical status (words vs. nonwords). These findings are difficult to reconcile with claims that spatial attention is less involved in processing familiar words than in unfamiliar words and nonwords. The results can be reconciled with a late-selection locus of spatial attention only with difficulty, but are easily explained by early-selection models.

  3. ERP manifestations of processing printed words at different psycholinguistic levels: time course and scalp distribution.

    PubMed

    Bentin, S; Mouchetant-Rostaing, Y; Giard, M H; Echallier, J F; Pernier, J

    1999-05-01

    The aim of the present study was to examine the time course and scalp distribution of electrophysiological manifestations of the visual word recognition mechanism. Event-related potentials (ERPs) elicited by visually presented lists of words were recorded while subjects were involved in a series of oddball tasks. The distinction between the designated target and nontarget stimuli was manipulated to induce a different level of processing in each session (visual, phonological/phonetic, phonological/lexical, and semantic). The ERPs of main interest in this study were those elicited by nontarget stimuli. In the visual task the targets were twice as big as the nontargets. Words, pseudowords, strings of consonants, strings of alphanumeric symbols, and strings of forms elicited a sharp negative peak at 170 msec (N170); their distribution was limited to the occipito-temporal sites. For the left hemisphere electrode sites, the N170 was larger for orthographic than for nonorthographic stimuli and vice versa for the right hemisphere. The ERPs elicited by all orthographic stimuli formed a clearly distinct cluster that was different from the ERPs elicited by nonorthographic stimuli. In the phonological/phonetic decision task the targets were words and pseudowords rhyming with the French word vitrail, whereas the nontargets were words, pseudowords, and strings of consonants that did not rhyme with vitrail. The most conspicuous potential was a negative peak at 320 msec, which was similarly elicited by pronounceable stimuli but not by nonpronounceable stimuli. The N320 was bilaterally distributed over the middle temporal lobe and was significantly larger over the left than over the right hemisphere. In the phonological/lexical processing task we compared the ERPs elicited by strings of consonants (among which words were selected), pseudowords (among which words were selected), and by words (among which pseudowords were selected). The most conspicuous potential in these tasks was a negative potential peaking at 350 msec (N350) elicited by phonologically legal but not by phonologically illegal stimuli. The distribution of the N350 was similar to that of the N320, but it was broader and including temporo-parietal areas that were not activated in the "rhyme" task. Finally, in the semantic task the targets were abstract words, and the nontargets were concrete words, pseudowords, and strings of consonants. The negative potential in this task peaked at 450 msec. Unlike the lexical decision, the negative peak in this task significantly distinguished not only between phonologically legal and illegal words but also between meaningful (words) and meaningless (pseudowords) phonologically legal structures. The distribution of the N450 included the areas activated in the lexical decision task but also areas in the fronto-central regions. The present data corroborated the functional neuroanatomy of word recognition systems suggested by other neuroimaging methods and described their timecourse, supporting a cascade-type process that involves different but interconnected neural modules, each responsible for a different level of processing word-related information.

  4. The differential effect of trigeminal vs. peripheral pain stimulation on visual processing and memory encoding is influenced by pain-related fear.

    PubMed

    Schmidt, K; Forkmann, K; Sinke, C; Gratz, M; Bitz, A; Bingel, U

    2016-07-01

    Compared to peripheral pain, trigeminal pain elicits higher levels of fear, which is assumed to enhance the interruptive effects of pain on concomitant cognitive processes. In this fMRI study we examined the behavioral and neural effects of trigeminal (forehead) and peripheral (hand) pain on visual processing and memory encoding. Cerebral activity was measured in 23 healthy subjects performing a visual categorization task that was immediately followed by a surprise recognition task. During the categorization task subjects received concomitant noxious electrical stimulation on the forehead or hand. Our data show that fear ratings were significantly higher for trigeminal pain. Categorization and recognition performance did not differ between pictures that were presented with trigeminal and peripheral pain. However, object categorization in the presence of trigeminal pain was associated with stronger activity in task-relevant visual areas (lateral occipital complex, LOC), memory encoding areas (hippocampus and parahippocampus) and areas implicated in emotional processing (amygdala) compared to peripheral pain. Further, individual differences in neural activation between the trigeminal and the peripheral condition were positively related to differences in fear ratings between both conditions. Functional connectivity between amygdala and LOC was increased during trigeminal compared to peripheral painful stimulation. Fear-driven compensatory resource activation seems to be enhanced for trigeminal stimuli, presumably due to their exceptional biological relevance. Copyright © 2016 Elsevier Inc. All rights reserved.

  5. Differential verbal, visual, and spatial working memory in written language production.

    PubMed

    Raulerson, Bascom A; Donovan, Michael J; Whiteford, Alison P; Kellogg, Ronald T

    2010-02-01

    The contributions of verbal, visual, and spatial working memory to written language production were investigated. Participants composed definitions for nouns while concurrently performing a task which required updating, storing, and retrieving information coded either verbally, visually, or spatially. The present study extended past findings by showing the linguistic encoding of planned conceptual content makes its largest demand on verbal working memory for both low and high frequency nouns. Kellogg, Olive, and Piolat in 2007 found that concrete nouns place substantial demands on visual working memory when imaging the nouns' referents during planning, whereas abstract nouns make no demand. The current study further showed that this pattern was not an artifact of visual working memory being sensitive to manipulation of just any lexical property of the noun prompts. In contrast to past results, writing made a small but detectible demand on spatial working memory.

  6. Overcoming default categorical bias in spatial memory.

    PubMed

    Sampaio, Cristina; Wang, Ranxiao Frances

    2010-12-01

    In the present study, we investigated whether a strong default categorical bias can be overcome in spatial memory by using alternative membership information. In three experiments, we tested location memory in a circular space while providing participants with an alternative categorization. We found that visual presentation of the boundaries of the alternative categories (Experiment 1) did not induce the use of the alternative categories in estimation. In contrast, visual cuing of the alternative category membership of a target (Experiment 2) and unique target feature information associated with each alternative category (Experiment 3) successfully led to the use of the alternative categories in estimation. Taken together, the results indicate that default categorical bias in spatial memory can be overcome when appropriate cues are provided. We discuss how these findings expand the category adjustment model (Huttenlocher, Hedges, & Duncan, 1991) in spatial memory by proposing a retrieval-based category adjustment (RCA) model.

  7. Learning Category-Specific Dictionary and Shared Dictionary for Fine-Grained Image Categorization.

    PubMed

    Gao, Shenghua; Tsang, Ivor Wai-Hung; Ma, Yi

    2014-02-01

    This paper targets fine-grained image categorization by learning a category-specific dictionary for each category and a shared dictionary for all the categories. Such category-specific dictionaries encode subtle visual differences among different categories, while the shared dictionary encodes common visual patterns among all the categories. To this end, we impose incoherence constraints among the different dictionaries in the objective of feature coding. In addition, to make the learnt dictionary stable, we also impose the constraint that each dictionary should be self-incoherent. Our proposed dictionary learning formulation not only applies to fine-grained classification, but also improves conventional basic-level object categorization and other tasks such as event recognition. Experimental results on five data sets show that our method can outperform the state-of-the-art fine-grained image categorization frameworks as well as sparse coding based dictionary learning frameworks. All these results demonstrate the effectiveness of our method.
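
    One illustrative way to write down such an objective, consistent with the description above but not necessarily the authors' exact formulation, combines per-category reconstruction with sparse codes, mutual incoherence between the category-specific dictionaries and the shared dictionary, and a self-incoherence term for each dictionary. Here X_c denotes the features of category c, D_c its category-specific dictionary, D_0 the shared dictionary, A_c the sparse codes, and λ, η, η_0, γ are free weighting parameters assumed for exposition:

      \min_{D_0,\,\{D_c\},\,\{A_c\}}\;
        \sum_{c}\Big(\big\|X_c - [\,D_c\;\;D_0\,]\,A_c\big\|_F^2 + \lambda\,\|A_c\|_1\Big)
        \;+\;\eta\sum_{c\neq c'}\big\|D_c^\top D_{c'}\big\|_F^2
        \;+\;\eta_0\sum_{c}\big\|D_c^\top D_0\big\|_F^2
        \;+\;\gamma\sum_{c}\big\|D_c^\top D_c - I\big\|_F^2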

  8. The biological basis of a universal constraint on color naming: cone contrasts and the two-way categorization of colors.

    PubMed

    Xiao, Youping; Kavanau, Christopher; Bertin, Lauren; Kaplan, Ehud

    2011-01-01

    Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of "warm" and "cool". This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.

  9. Fast periodic presentation of natural images reveals a robust face-selective electrophysiological response in the human brain.

    PubMed

    Rossion, Bruno; Torfs, Katrien; Jacques, Corentin; Liu-Shuang, Joan

    2015-01-16

    We designed a fast periodic visual stimulation approach to identify an objective signature of face categorization incorporating both visual discrimination (from nonface objects) and generalization (across widely variable face exemplars). Scalp electroencephalographic (EEG) data were recorded in 12 human observers viewing natural images of objects at a rapid frequency of 5.88 images/s for 60 s. Natural images of faces were interleaved every five stimuli, i.e., at 1.18 Hz (5.88/5). Face categorization was indexed by a high signal-to-noise ratio response, specifically at an oddball face stimulation frequency of 1.18 Hz and its harmonics. This face-selective periodic EEG response was highly significant for every participant, even for a single 60-s sequence, and was generally localized over the right occipitotemporal cortex. The periodicity constraint and the large selection of stimuli ensured that this selective response to natural face images was free of low-level visual confounds, as confirmed by the absence of any oddball response for phase-scrambled stimuli. Without any subtraction procedure, time-domain analysis revealed a sequence of differential face-selective EEG components between 120 and 400 ms after oddball face image onset, progressing from medial occipital (P1-faces) to occipitotemporal (N1-faces) and anterior temporal (P2-faces) regions. Overall, this fast periodic visual stimulation approach provides a direct signature of natural face categorization and opens an avenue for efficiently measuring categorization responses of complex visual stimuli in the human brain. © 2015 ARVO.
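
    The frequency-tagging logic lends itself to a short sketch: the categorization response is read out as the spectral amplitude at the 1.18 Hz oddball frequency (and its harmonics), expressed relative to neighbouring frequency bins. The code below is a minimal single-channel illustration on synthetic data with an assumed neighbouring-bin SNR definition; it is not the authors' pipeline.

      # Minimal sketch of a frequency-tagging (FPVS) analysis: amplitude spectrum of one
      # EEG channel, with SNR at the oddball frequency defined relative to neighbouring
      # bins. The SNR definition and bin choices are illustrative assumptions.
      import numpy as np

      def amplitude_spectrum(eeg, srate):
          """Return frequency axis and single-sided amplitude spectrum of one channel."""
          n = len(eeg)
          freqs = np.fft.rfftfreq(n, d=1.0 / srate)
          amps = np.abs(np.fft.rfft(eeg)) / n
          return freqs, amps

      def snr_at(freqs, amps, target_hz, n_neighbours=10, skip=1):
          """Amplitude at the target frequency divided by the mean of surrounding bins."""
          idx = int(np.argmin(np.abs(freqs - target_hz)))
          lo = amps[max(idx - skip - n_neighbours, 0): idx - skip]
          hi = amps[idx + skip + 1: idx + skip + 1 + n_neighbours]
          return amps[idx] / np.mean(np.concatenate([lo, hi]))

      if __name__ == "__main__":
          srate, dur, base_hz, oddball_hz = 250.0, 60.0, 5.88, 1.18
          t = np.arange(0, dur, 1.0 / srate)
          # Synthetic channel: base and oddball responses buried in noise.
          eeg = (0.5 * np.sin(2 * np.pi * base_hz * t)
                 + 0.3 * np.sin(2 * np.pi * oddball_hz * t)
                 + np.random.randn(t.size))
          freqs, amps = amplitude_spectrum(eeg, srate)
          print("SNR at oddball frequency:", snr_at(freqs, amps, oddball_hz))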

  10. A Preliminary Account of the Effect of Otitis Media on 15-Month- Olds' Categorization and Some Implications for Early Language Learning.

    ERIC Educational Resources Information Center

    Roberts, Kenneth

    1997-01-01

    Infants (N=24) with history of otitis media and tube placement were tested for categorical responding within a visual familiarization-discrimination model. Findings suggest that even mild hearing loss may adversely affect categorical responding under specific input conditions, which may persist after normal hearing is restored, possibly because…

  11. Distinct functional contributions of primary sensory and association areas to audiovisual integration in object categorization.

    PubMed

    Werner, Sebastian; Noppeney, Uta

    2010-02-17

    Multisensory interactions have been demonstrated in a distributed neural system encompassing primary sensory and higher-order association areas. However, their distinct functional roles in multisensory integration remain unclear. This functional magnetic resonance imaging study dissociated the functional contributions of three cortical levels to multisensory integration in object categorization. Subjects actively categorized or passively perceived noisy auditory and visual signals emanating from everyday actions with objects. The experiment included two 2 x 2 factorial designs that manipulated either (1) the presence/absence or (2) the informativeness of the sensory inputs. These experimental manipulations revealed three patterns of audiovisual interactions. (1) In primary auditory cortices (PACs), a concurrent visual input increased the stimulus salience by amplifying the auditory response regardless of task-context. Effective connectivity analyses demonstrated that this automatic response amplification is mediated via both direct and indirect [via superior temporal sulcus (STS)] connectivity to visual cortices. (2) In STS and intraparietal sulcus (IPS), audiovisual interactions sustained the integration of higher-order object features and predicted subjects' audiovisual benefits in object categorization. (3) In the left ventrolateral prefrontal cortex (vlPFC), explicit semantic categorization resulted in suppressive audiovisual interactions as an index for multisensory facilitation of semantic retrieval and response selection. In conclusion, multisensory integration emerges at multiple processing stages within the cortical hierarchy. The distinct profiles of audiovisual interactions dissociate audiovisual salience effects in PACs, formation of object representations in STS/IPS and audiovisual facilitation of semantic categorization in vlPFC. Furthermore, in STS/IPS, the profiles of audiovisual interactions were behaviorally relevant and predicted subjects' multisensory benefits in performance accuracy.

  12. Independent sources of anisotropy in visual orientation representation: a visual and a cognitive oblique effect.

    PubMed

    Balikou, Panagiota; Gourtzelidis, Pavlos; Mantas, Asimakis; Moutoussis, Konstantinos; Evdokimidis, Ioannis; Smyrnis, Nikolaos

    2015-11-01

The representation of visual orientation is more accurate for cardinal orientations compared to oblique, and this anisotropy has been hypothesized to reflect a low-level visual process (visual, "class 1" oblique effect). The reproduction of directional and orientation information also leads to a mean error away from cardinal orientations or directions. This anisotropy has been hypothesized to reflect a high-level cognitive process of space categorization (cognitive, "class 2" oblique effect). This space categorization process would be more prominent when the visual representation of orientation degrades, such as in the case of working memory with increasing cognitive load, leading to increasing magnitude of the "class 2" oblique effect, while the "class 1" oblique effect would remain unchanged. Two experiments were performed in which an array of orientation stimuli (1-4 items) was presented and then subjects had to realign a probe stimulus within the previously presented array. In the first experiment, the delay between stimulus presentation and probe varied, while in the second experiment, the stimulus presentation time varied. The variable error was larger for oblique compared to cardinal orientations in both experiments, reproducing the visual "class 1" oblique effect. The mean error also reproduced the tendency away from cardinal and toward the oblique orientations in both experiments (cognitive "class 2" oblique effect). The accuracy of the reproduced orientation degraded (increasing variable error) and the cognitive "class 2" oblique effect increased with increasing memory load (number of items) in both experiments and presentation time in the second experiment. In contrast, the visual "class 1" oblique effect was not significantly modulated by any one of these experimental factors. These results confirmed the theoretical predictions for the two anisotropies in visual orientation reproduction and provided support for models proposing the categorization of orientation in visual working memory.

  13. How does Interhemispheric Communication in Visual Word Recognition Work? Deciding between Early and Late Integration Accounts of the Split Fovea Theory

    ERIC Educational Resources Information Center

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J.

    2009-01-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision…

  14. Semantic Richness and Aging: The Effect of Number of Features in the Lexical Decision Task

    ERIC Educational Resources Information Center

    Robert, Christelle; Rico Duarte, Liliana

    2016-01-01

    The aim of this study was to examine whether the effect of semantic richness in visual word recognition (i.e., words with a rich semantic representation are faster to recognize than words with a poorer semantic representation), is changed with aging. Semantic richness was investigated by manipulating the number of features of words (NOF), i.e.,…

  15. Lexicality, Morphological Structure, and Semantic Transparency in the Processing of German Ver-Verbs: The Complementarity of On-Line and Off-Line Evidence

    ERIC Educational Resources Information Center

    Schirmeier, Matthias K.; Derwing, Bruce L.; Libben, Gary

    2004-01-01

Two types of experiments investigate the visual on-line and off-line processing of German ver-verbs (e.g., verbittern "to embitter"). In Experiments 1 and 2 (morphological priming), latency patterns revealed the existence of facilitation effects for the morphological conditions (BITTER-VERBITTERN and BITTERN-VERBITTERN) as compared to the neutral…

  16. Spatial distance effects on incremental semantic interpretation of abstract sentences: evidence from eye tracking.

    PubMed

    Guerra, Ernesto; Knoeferle, Pia

    2014-12-01

A large body of evidence has shown that visual context information can rapidly modulate language comprehension for concrete sentences and when it is mediated by a referential or a lexical-semantic link. What has not yet been examined is whether visual context can also modulate comprehension of abstract sentences incrementally when it is neither referenced by, nor lexically associated with, the sentence. Three eye-tracking reading experiments examined the effects of spatial distance between words (Experiment 1) and objects (Experiments 2 and 3) on participants' reading times for sentences that convey similarity or difference between two abstract nouns (e.g., 'Peace and war are certainly different...'). Before reading the sentence, participants inspected a visual context with two playing cards that moved either far apart or close together. In Experiment 1, the cards turned and showed the first two nouns of the sentence (e.g., 'peace', 'war'). In Experiments 2 and 3, they turned but remained blank. Participants' reading times at the adjective (Experiment 1: first-pass reading time; Experiment 2: total times) and at the second noun phrase (Experiment 3: first-pass times) were faster for sentences that expressed similarity when the preceding words/objects were close together (vs. far apart) and for sentences that expressed dissimilarity when the preceding words/objects were far apart (vs. close together). Thus, spatial distance between words or entirely unrelated objects can rapidly and incrementally modulate the semantic interpretation of abstract sentences. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Visual speech segmentation: using facial cues to locate word boundaries in continuous speech

    PubMed Central

    Mitchel, Aaron D.; Weiss, Daniel J.

    2014-01-01

    Speech is typically a multimodal phenomenon, yet few studies have focused on the exclusive contributions of visual cues to language acquisition. To address this gap, we investigated whether visual prosodic information can facilitate speech segmentation. Previous research has demonstrated that language learners can use lexical stress and pitch cues to segment speech and that learners can extract this information from talking faces. Thus, we created an artificial speech stream that contained minimal segmentation cues and paired it with two synchronous facial displays in which visual prosody was either informative or uninformative for identifying word boundaries. Across three familiarisation conditions (audio stream alone, facial streams alone, and paired audiovisual), learning occurred only when the facial displays were informative to word boundaries, suggesting that facial cues can help learners solve the early challenges of language acquisition. PMID:25018577

  18. Interobserver Variability in Histologic Evaluation of Liver Fibrosis Using Categorical and Quantitative Scores.

    PubMed

    Pavlides, Michael; Birks, Jacqueline; Fryer, Eve; Delaney, David; Sarania, Nikita; Banerjee, Rajarshi; Neubauer, Stefan; Barnes, Eleanor; Fleming, Kenneth A; Wang, Lai Mun

    2017-04-01

The aim of the study was to investigate the interobserver agreement for categorical and quantitative scores of liver fibrosis. Sixty-five consecutive biopsy specimens from patients with mixed liver disease etiologies were assessed by three pathologists using the Ishak and nonalcoholic steatohepatitis Clinical Research Network (NASH CRN) scoring systems, and the fibrosis area (collagen proportionate area [CPA]) was estimated by visual inspection (visual-CPA). A subset of 20 biopsy specimens was analyzed using digital imaging analysis (DIA) for the measurement of CPA (DIA-CPA). The bivariate weighted κ between any two pathologists ranged from 0.57 to 0.67 for Ishak staging and from 0.47 to 0.57 for the NASH CRN staging. Bland-Altman analysis showed poor agreement between all possible pathologist pairings for visual-CPA but good agreement between all pathologist pairings for DIA-CPA. There was good agreement between the two pathologists who assessed biopsy specimens by visual-CPA and DIA-CPA. The intraclass correlation coefficient, which is equivalent to the κ statistic for continuous variables, was 0.78 for visual-CPA and 0.97 for DIA-CPA. These results suggest that DIA-CPA is the most robust method for assessing liver fibrosis followed by visual-CPA. Categorical scores perform less well than both the quantitative CPA scores assessed here. © American Society for Clinical Pathology, 2017. All rights reserved.
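
    For readers unfamiliar with the agreement statistics mentioned here, the sketch below computes a weighted kappa for ordinal stage ratings and a two-way random-effects ICC(2,1) for continuous CPA measurements, using hypothetical data; the choice of linear weights and of the ICC variant are assumptions, since the abstract does not specify them.

    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    # Hypothetical Ishak stages (0-6) from two pathologists for the same biopsies
    rater_a = np.array([2, 3, 4, 1, 6, 5, 3, 2, 0, 4])
    rater_b = np.array([3, 3, 4, 2, 5, 5, 2, 2, 1, 4])

    # Weighted kappa for ordinal categorical scores (linear weights assumed here)
    kappa = cohen_kappa_score(rater_a, rater_b, weights='linear')

    def icc_two_way(ratings):
        """Two-way random-effects, absolute-agreement, single-rater ICC(2,1)
        for continuous scores such as collagen proportionate area.
        ratings: (n_subjects, n_raters) array."""
        n, k = ratings.shape
        grand = ratings.mean()
        ms_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
        ms_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum() / (k - 1)
        resid = (ratings - ratings.mean(axis=1, keepdims=True)
                 - ratings.mean(axis=0, keepdims=True) + grand)
        ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                     + k * (ms_cols - ms_err) / n)

    cpa = np.array([[12.0, 14.0], [30.0, 28.5], [7.5, 9.0], [22.0, 21.0]])  # % area, 2 raters
    print(kappa, icc_two_way(cpa))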

  19. Anomalous visual experiences, negative symptoms, perceptual organization and the magnocellular pathway in schizophrenia: a shared construct?

    PubMed

    Kéri, Szabolcs; Kiss, Imre; Kelemen, Oguz; Benedek, György; Janka, Zoltán

    2005-10-01

Schizophrenia is associated with impaired visual information processing. The aim of this study was to investigate the relationship between anomalous perceptual experiences, positive and negative symptoms, perceptual organization, rapid categorization of natural images and magnocellular (M) and parvocellular (P) visual pathway functioning. Thirty-five unmedicated patients with schizophrenia and 20 matched healthy control volunteers participated. Anomalous perceptual experiences were assessed with the Bonn Scale for the Assessment of Basic Symptoms (BSABS). General intellectual functions were evaluated with the revised version of the Wechsler Adult Intelligence Scale. The 1-9 version of the Continuous Performance Test (CPT) was used to investigate sustained attention. The following psychophysical tests were used: detection of Gabor patches with collinear and orthogonal flankers (perceptual organization), categorization of briefly presented natural scenes (rapid visual processing), low-contrast and frequency-doubling vernier threshold (M pathway functioning), isoluminant colour vernier threshold and high spatial frequency discrimination (P pathway functioning). The patients with schizophrenia were impaired on tests of perceptual organization, rapid visual processing and M pathway functioning. There was a significant correlation between BSABS scores, negative symptoms, perceptual organization, rapid visual processing and M pathway functioning. Positive symptoms, IQ, CPT and P pathway measures did not correlate with these parameters. The best predictor of the BSABS score was the perceptual organization deficit. These results raise the possibility that multiple facets of visual information processing deficits can be explained by M pathway dysfunctions in schizophrenia, resulting in impaired attentional modulation of perceptual organization and of natural image categorization.

  20. Does the reading of different orthographies produce distinct brain activity patterns? An ERP study.

    PubMed

    Bar-Kochva, Irit; Breznitz, Zvia

    2012-01-01

    Orthographies vary in the degree of transparency of spelling-sound correspondence. These range from shallow orthographies with transparent grapheme-phoneme relations, to deep orthographies, in which these relations are opaque. Only a few studies have examined whether orthographic depth is reflected in brain activity. In these studies a between-language design was applied, making it difficult to isolate the aspect of orthographic depth. In the present work this question was examined using a within-subject-and-language investigation. The participants were speakers of Hebrew, as they are skilled in reading two forms of script transcribing the same oral language. One form is the shallow pointed script (with diacritics), and the other is the deep unpointed script (without diacritics). Event-related potentials (ERPs) were recorded while skilled readers carried out a lexical decision task in the two forms of script. A visual non-orthographic task controlled for the visual difference between the scripts (resulting from the addition of diacritics to the pointed script only). At an early visual-perceptual stage of processing (~165 ms after target onset), the pointed script evoked larger amplitudes with longer latencies than the unpointed script at occipital-temporal sites. However, these effects were not restricted to orthographic processing, and may therefore have reflected, at least in part, the visual load imposed by the diacritics. Nevertheless, the results implied that distinct orthographic processing may have also contributed to these effects. At later stages (~340 ms after target onset) the unpointed script elicited larger amplitudes than the pointed one with earlier latencies. As this latency has been linked to orthographic-linguistic processing and to the classification of stimuli, it is suggested that these differences are associated with distinct lexical processing of a shallow and a deep orthography.

  1. Direct versus indirect processing changes the influence of color in natural scene categorization.

    PubMed

    Otsuka, Sachio; Kawaguchi, Jun

    2009-10-01

Using a negative priming (NP) paradigm, we examined whether participants would categorize color and grayscale images of natural scenes that were presented peripherally and ignored. We focused on (1) attentional resources allocated to natural scenes and (2) direct versus indirect processing of them. We set up low and high attention-load conditions, based on the set size of the searched stimuli in the prime display (one and five). Participants were required to detect and categorize the target objects in natural scenes in a central visual search task, ignoring peripheral natural images in both the prime and probe displays. The results showed that, irrespective of attention load, NP was observed for color scenes but not for grayscale scenes. We did not observe any effect of color information in central visual search, where participants responded directly to natural scenes. These results indicate that, in a situation in which participants indirectly process natural scenes, color information is critical to object categorization, but when the scenes are processed directly, color information does not contribute to categorization.

  2. Reading and Spelling in Adults: Are There Lexical and Sub-Lexical Subtypes?

    ERIC Educational Resources Information Center

    Burt, Jennifer S.; Heffernan, Maree E.

    2012-01-01

    The dual-route model of reading proposes distinct lexical and sub-lexical procedures for word reading and spelling. Lexically reliant and sub-lexically reliant reader subgroups were selected from 78 university students on the basis of their performance on lexical (orthographic) and sub-lexical (phonological) choice tests, and on irregular and…

  3. Neural correlates of priming effects in children during spoken word processing with orthographic demands

    PubMed Central

    Cao, Fan; Khalid, Kainat; Zaveri, Rishi; Bolger, Donald J.; Bitan, Tali; Booth, James R.

    2009-01-01

    Priming effects were examined in 40 children (9 - 15 years old) using functional magnetic resonance imaging (fMRI). An orthographic judgment task required participants to determine if two sequentially presented spoken words had the same spelling for the rime. Four lexical conditions were designed: similar orthography and phonology (O+P+), similar orthography but different phonology (O+P−), similar phonology but different orthography (O−P+), and different orthography and phonology (O−P−). In left superior temporal gyrus, there was lower activation for targets in O+P+ than for those in O−P− and higher accuracy was correlated with stronger activation across all lexical conditions. These results provide evidence for phonological priming in children and greater elaboration of phonological representations in higher skill children, respectively. In left fusiform gyrus, there was lower activation for targets in O+P+ and O+P− than for those in O−P−, suggesting that visual similarity resulted in orthographic priming even with only auditory input. In left middle temporal gyrus, there was lower activation for targets in O+P+ than all other lexical conditions, suggesting that converging orthographic and phonological information resulted in a weaker influence on semantic representations. In addition, higher reading skill was correlated with weaker activation in left middle temporal gyrus across all lexical conditions, suggesting that higher skill children rely to a lesser degree on semantics as a compensatory mechanism. Finally, conflict effects but not priming effects were observed in left inferior frontal gyrus, suggesting that this region is involved in resolving conflicting orthographic and phonological information but not in perceptual priming. PMID:19665784

  4. Lexical learning in mild aphasia: gesture benefit depends on patholinguistic profile and lesion pattern.

    PubMed

    Kroenke, Klaus-Martin; Kraft, Indra; Regenbrecht, Frank; Obrig, Hellmuth

    2013-01-01

    Gestures accompany speech and enrich human communication. When aphasia interferes with verbal abilities, gestures become even more relevant, compensating for and/or facilitating verbal communication. However, small-scale clinical studies yielded diverging results with regard to a therapeutic gesture benefit for lexical retrieval. Based on recent functional neuroimaging results, delineating a speech-gesture integration network for lexical learning in healthy adults, we hypothesized that the commonly observed variability may stem from differential patholinguistic profiles in turn depending on lesion pattern. Therefore we used a controlled novel word learning paradigm to probe the impact of gestures on lexical learning, in the lesioned language network. Fourteen patients with chronic left hemispheric lesions and mild residual aphasia learned 30 novel words for manipulable objects over four days. Half of the words were trained with gestures while the other half were trained purely verbally. For the gesture condition, rootwords were visually presented (e.g., Klavier, [piano]), followed by videos of the corresponding gestures and the auditory presentation of the novel words (e.g., /krulo/). Participants had to repeat pseudowords and simultaneously reproduce gestures. In the verbal condition no gesture-video was shown and participants only repeated pseudowords orally. Correlational analyses confirmed that gesture benefit depends on the patholinguistic profile: lesser lexico-semantic impairment correlated with better gesture-enhanced learning. Conversely largely preserved segmental-phonological capabilities correlated with better purely verbal learning. Moreover, structural MRI-analysis disclosed differential lesion patterns, most interestingly suggesting that integrity of the left anterior temporal pole predicted gesture benefit. Thus largely preserved semantic capabilities and relative integrity of a semantic integration network are prerequisites for successful use of the multimodal learning strategy, in which gestures may cause a deeper semantic rooting of the novel word-form. The results tap into theoretical accounts of gestures in lexical learning and suggest an explanation for the diverging effect in therapeutical studies advocating gestures in aphasia rehabilitation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Item parameters dissociate between expectation formats: a regression analysis of time-frequency decomposed EEG data

    PubMed Central

    Monsalve, Irene F.; Pérez, Alejandro; Molinaro, Nicola

    2014-01-01

    During language comprehension, semantic contextual information is used to generate expectations about upcoming items. This has been commonly studied through the N400 event-related potential (ERP), as a measure of facilitated lexical retrieval. However, the associative relationships in multi-word expressions (MWE) may enable the generation of a categorical expectation, leading to lexical retrieval before target word onset. Processing of the target word would thus reflect a target-identification mechanism, possibly indexed by a P3 ERP component. However, given their time overlap (200–500 ms post-stimulus onset), differentiating between N400/P3 ERP responses (averaged over multiple linguistically variable trials) is problematic. In the present study, we analyzed EEG data from a previous experiment, which compared ERP responses to highly expected words that were placed either in a MWE or a regular non-fixed compositional context, and to low predictability controls. We focused on oscillatory dynamics and regression analyses, in order to dissociate between the two contexts by modeling the electrophysiological response as a function of item-level parameters. A significant interaction between word position and condition was found in the regression model for power in a theta range (~7–9 Hz), providing evidence for the presence of qualitative differences between conditions. Power levels within this band were lower for MWE than compositional contexts when the target word appeared later on in the sentence, confirming that in the former lexical retrieval would have taken place before word onset. On the other hand, gamma-power (~50–70 Hz) was also modulated by predictability of the item in all conditions, which is interpreted as an index of a similar “matching” sub-step for both types of contexts, binding an expected representation and the external input. PMID:25161630
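
    As a rough illustration of the analysis style described (band-limited power entered into an item-level regression), the sketch below estimates per-trial theta power with a band-pass filter and Hilbert envelope and regresses its logarithm on word position, context type, and their interaction. The filter settings and predictor names are placeholders, not the study's actual pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def band_power(trials, fs, band=(7.0, 9.0)):
        """Per-trial power in a narrow band (roughly the 7-9 Hz theta range named
        in the abstract), estimated with a band-pass filter plus Hilbert envelope.
        trials: (n_trials, n_samples) single-channel epochs around word onset."""
        b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
        envelope = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1))
        return (envelope ** 2).mean(axis=1)              # mean power per trial

    def item_level_regression(power, word_position, is_mwe):
        """Ordinary least squares of log power on word position, context type
        (MWE vs. compositional) and their interaction; predictor names are
        placeholders for whatever item-level parameters were coded."""
        X = np.column_stack([np.ones_like(power),
                             word_position,
                             is_mwe,
                             word_position * is_mwe])    # interaction term
        beta, *_ = np.linalg.lstsq(X, np.log(power), rcond=None)
        return beta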

  6. The development of a natural language interface to a geographical information system

    NASA Technical Reports Server (NTRS)

    Toledo, Sue Walker; Davis, Bruce

    1993-01-01

This paper discusses a two-and-a-half-year project undertaken to develop an English-language interface for the geographical information system GRASS. The work was carried out for NASA by a small business, Netrologic, based in San Diego, California, under Phase 1 and 2 Small Business Innovative Research contracts. We consider here the potential value of this system, whose current functionality addresses numerical, categorical, and boolean raster layers; displays point sets defined by constraints on one or more layers; answers yes/no and numerical questions; and creates statistical reports. It also handles complex queries and lexical ambiguities, and allows temporarily switching to UNIX or GRASS.

  7. Active sensing in the categorization of visual patterns

    PubMed Central

    Yang, Scott Cheng-Hsin; Lengyel, Máté; Wolpert, Daniel M

    2016-01-01

    Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing strategy that incorporates information already acquired about the scene and knowledge of the statistical structure of patterns. Intriguingly, categorization performance was markedly improved when locations were revealed to participants by an optimal Bayesian active sensor algorithm. By using a combination of a Bayesian ideal observer and the active sensor algorithm, we estimate that a major portion of this apparent suboptimality of fixation locations arises from prior biases, perceptual noise and inaccuracies in eye movements, and the central process of selecting fixation locations is around 70% efficient in our task. Our results suggest that participants select eye movements with the goal of maximizing information about abstract categories that require the integration of information from multiple locations. DOI: http://dx.doi.org/10.7554/eLife.12215.001 PMID:26880546
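
    The sketch below is a toy version of the idea of an optimal active sensor: over a small grid of binary "pixels", it selects the unrevealed location whose observation is expected to reduce posterior uncertainty about the category the most. It is a greedy one-step information-gain rule under made-up category models, not the Bayesian active sensor algorithm used in the study, which operates on continuous image patterns and accounts for perceptual and motor noise.

    import numpy as np

    def entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return -(p * np.log2(p)).sum()

    def next_fixation(prob_maps, revealed, prior):
        """Toy Bayesian active sensor: pick the unrevealed location whose
        observation is expected to reduce posterior entropy over categories most.
        prob_maps: (n_categories, n_locations) P(pixel = 1 | category);
        revealed: dict location -> observed 0/1; prior: (n_categories,) prior."""
        post = prior.copy()
        for loc, v in revealed.items():                   # posterior given revealed pixels
            post *= prob_maps[:, loc] if v else (1 - prob_maps[:, loc])
        post /= post.sum()
        h_now = entropy(post)
        best_loc, best_gain = None, -np.inf
        for loc in range(prob_maps.shape[1]):
            if loc in revealed:
                continue
            gain = 0.0
            for v in (0, 1):
                like = prob_maps[:, loc] if v else (1 - prob_maps[:, loc])
                p_v = (post * like).sum()                 # predictive prob. of observing v
                post_v = post * like / p_v
                gain += p_v * (h_now - entropy(post_v))   # expected information gain
            if gain > best_gain:
                best_loc, best_gain = loc, gain
        return best_loc, post

    # Example with two hypothetical categories over 9 locations
    maps = np.array([[.9, .1, .9, .1, .9, .1, .9, .1, .9],
                     [.5, .5, .5, .5, .5, .5, .5, .5, .5]])
    loc, post = next_fixation(maps, revealed={0: 1}, prior=np.array([0.5, 0.5]))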

  8. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    PubMed

    O'Brien, Alexander M

    2018-02-01

Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions and categorical object differences in visual discrimination tasks, whereas the control participants could additionally draw on face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  9. Phonological Priming in Children with Hearing Loss: Effect of Speech Mode, Fidelity, and Lexical Status

    PubMed Central

    Jerger, Susan; Tye-Murray, Nancy; Damian, Markus F.; Abdi, Hervé

    2016-01-01

    Objectives Our research determined 1) how phonological priming of picture naming was affected by the mode (auditory-visual [AV] vs auditory), fidelity (intact vs non-intact auditory onsets), and lexical status (words vs nonwords) of speech stimuli in children with prelingual sensorineural hearing impairment (CHI) vs. children with normal hearing (CNH); and 2) how the degree of hearing impairment (HI), auditory word recognition, and age influenced results in CHI. Note that some of our AV stimuli were not the traditional bimodal input but instead they consisted of an intact consonant/rhyme in the visual track coupled to a non-intact onset/rhyme in the auditory track. Example stimuli for the word bag are: 1) AV: intact visual (b/ag) coupled to non-intact auditory (−b/ag) and 2) Auditory: static face coupled to the same non-intact auditory (−b/ag). Our question was whether the intact visual speech would “restore or fill-in” the non-intact auditory speech in which case performance for the same auditory stimulus would differ depending upon the presence/absence of visual speech. Design Participants were 62 CHI and 62 CNH whose ages had a group-mean and -distribution akin to that in the CHI group. Ages ranged from 4 to 14 years. All participants met the following criteria: 1) spoke English as a native language, 2) communicated successfully aurally/orally, and 3) had no diagnosed or suspected disabilities other than HI and its accompanying verbal problems. The phonological priming of picture naming was assessed with the multi-modal picture word task. Results Both CHI and CNH showed greater phonological priming from high than low fidelity stimuli and from AV than auditory speech. These overall fidelity and mode effects did not differ in the CHI vs. CNH—thus these CHI appeared to have sufficiently well specified phonological onset representations to support priming and visual speech did not appear to be a disproportionately important source of the CHI’s phonological knowledge. Two exceptions occurred, however. First—with regard to lexical status—both the CHI and CNH showed significantly greater phonological priming from the nonwords than words, a pattern consistent with the prediction that children are more aware of phonetics-phonology content for nonwords. This overall pattern of similarity between the groups was qualified by the finding that CHI showed more nearly equal priming by the high vs. low fidelity nonwords than the CNH; in other words, the CHI were less affected by the fidelity of the auditory input for nonwords. Second, auditory word recognition—but not degree of HI or age—uniquely influenced phonological priming by the nonwords presented AV. Conclusions With minor exceptions, phonological priming in CHI and CNH showed more similarities than differences. Importantly, we documented that the addition of visual speech significantly increased phonological priming in both groups. Clinically these data support intervention programs that view visual speech as a powerful asset for developing spoken language in CHI. PMID:27438867

  10. Ideophones in Japanese modulate the P2 and late positive complex responses

    PubMed Central

    Lockwood, Gwilym; Tuomainen, Jyrki

    2015-01-01

    Sound-symbolism, or the direct link between sound and meaning, is typologically and behaviorally attested across languages. However, neuroimaging research has mostly focused on artificial non-words or individual segments, which do not represent sound-symbolism in natural language. We used EEG to compare Japanese ideophones, which are phonologically distinctive sound-symbolic lexical words, and arbitrary adverbs during a sentence reading task. Ideophones elicit a larger visual P2 response than arbitrary adverbs, as well as a sustained late positive complex. Our results and previous literature suggest that the larger P2 may indicate the integration of sound and sensory information by association in response to the distinctive phonology of ideophones. The late positive complex may reflect the facilitated lexical retrieval of arbitrary words in comparison to ideophones. This account provides new evidence that ideophones exhibit similar cross-modal correspondences to those which have been proposed for non-words and individual sounds. PMID:26191031

  11. Early adolescents show sustained susceptibility to cognitive interference by emotional distractors.

    PubMed

    Heim, Sabine; Ihssen, Niklas; Hasselhorn, Marcus; Keil, Andreas

    2013-01-01

    A child's ability to continuously pay attention to a cognitive task is often challenged by distracting events. Distraction is especially detrimental in a learning or classroom environment in which attended information is typically associated with establishing skills and knowledge. Here we report a study examining the effect of emotional distractors on performance in a subsequent visual lexical decision task in 11- to 13-year-old students (n=30). Lexical decisions about neutral verbs and verb-like pseudowords (i.e., targets) were analysed as a function of the preceding distractor type (pleasant, neutral, or unpleasant photos) and the picture-target stimulus-onset asynchrony (SOA; 200 or 600 ms). Across distractor categories, emotionally arousing pictures prolonged decisions about word targets when compared to neutral pictures, irrespective of the SOA. The present results demonstrate that similar to adults, early adolescent students exhibit sustained susceptibility to cognitive interference by irrelevant emotional events.

  12. Development of Embodied Word Meanings: Sensorimotor Effects in Children's Lexical Processing.

    PubMed

    Inkster, Michelle; Wellsby, Michele; Lloyd, Ellen; Pexman, Penny M

    2016-01-01

    Previous research showed an effect of words' rated body-object interaction (BOI) in children's visual word naming performance, but only in children 8 years of age or older (Wellsby and Pexman, 2014a). In that study, however, BOI was established using adult ratings. Here we collected ratings from a group of parents for children's BOI experience (child-BOI). We examined effects of words' child-BOI and also words' imageability on children's responses in an auditory word naming task, which is suited to the lexical processing skills of younger children. We tested a group of 54 children aged 6-7 years and a comparison group of 25 adults. Results showed significant effects of both imageability and child-BOI on children's auditory naming latencies. These results provide evidence that children younger than 8 years of age have richer semantic representations for high imageability and high child-BOI words, consistent with an embodied account of word meaning.

  13. Phonological-Lexical Feedback during Early Abstract Encoding: The Case of Deaf Readers

    PubMed Central

    Perea, Manuel; Marcet, Ana; Vergara-Martínez, Marta

    2016-01-01

    In the masked priming technique, physical identity between prime and target enjoys an advantage over nominal identity in nonwords (GEDA-GEDA faster than geda-GEDA). However, nominal identity overrides physical identity in words (e.g., REAL-REAL similar to real-REAL). Here we tested whether the lack of an advantage of the physical identity condition for words was due to top-down feedback from phonological-lexical information. We examined this issue with deaf readers, as their phonological representations are not as fully developed as in hearing readers. Results revealed that physical identity enjoyed a processing advantage over nominal identity not only in nonwords but also in words (GEDA-GEDA faster than geda-GEDA; REAL-REAL faster than real-REAL). This suggests the existence of fundamental differences in the early stages of visual word recognition of hearing and deaf readers, possibly related to the amount of feedback from higher levels of information. PMID:26731110

  14. Suprasegmental Features Are Not Acquired Early: Perception and Production of Monosyllabic Cantonese Lexical Tones in 4- to 6-Year-Old Preschool Children.

    PubMed

    Wong, Puisan; Tsz-Tin Leung, Carrie

    2018-05-17

Previous studies reported that children acquire Cantonese tones before 3 years of age, supporting the assumption in models of phonological development that suprasegmental features are acquired rapidly and early in children. Yet, recent research found a large disparity in the age of Cantonese tone acquisition. This study investigated Cantonese tone development in 4- to 6-year-old children. Forty-eight 4- to 6-year-old Cantonese-speaking children and 28 mothers of the children labeled 30 pictures representing familiar words in the 6 tones in a picture-naming task and identified pictures representing words in different Cantonese tones in a picture-pointing task. To control for lexical biases in tone assessment, tone productions were low-pass filtered to eliminate lexical information. Five judges categorized the tones in filtered stimuli. Tone production accuracy, tone perception accuracy, and correlation between tone production and perception accuracy were examined. Children did not start to produce adultlike tones until 5 and 6 years of age. Four-year-olds produced none of the tones with adultlike accuracy. Five- and 6-year-olds attained adultlike productions in 2 (T5 and T6) to 3 (T4, T5, and T6) tones, respectively. Children made better progress in tone perception and achieved higher accuracy in perception than in production. However, children in all age groups perceived none of the tones as accurately as adults, except that T1 was perceived with adultlike accuracy by 6-year-olds. Only a weak association was found between children's tone perception and production accuracy. Contradicting the long-held assumption that children acquire lexical tone rapidly and early before the mastery of segmentals, this study found that 4- to 6-year-old children have not mastered the perception or production of the full set of Cantonese tones in familiar monosyllabic words. Greater development was found in children's tone perception than in their tone production. The higher perception accuracy, combined with the weak correlation between perception and production abilities, suggested that accurate tone perception is not sufficient for accurate tone production. The findings have clinical and theoretical implications.

  15. Webly-Supervised Fine-Grained Visual Categorization via Deep Domain Adaptation.

    PubMed

    Xu, Zhe; Huang, Shaoli; Zhang, Ya; Tao, Dacheng

    2018-05-01

    Learning visual representations from web data has recently attracted attention for object recognition. Previous studies have mainly focused on overcoming label noise and data bias and have shown promising results by learning directly from web data. However, we argue that it might be better to transfer knowledge from existing human labeling resources to improve performance at nearly no additional cost. In this paper, we propose a new semi-supervised method for learning via web data. Our method has the unique design of exploiting strong supervision, i.e., in addition to standard image-level labels, our method also utilizes detailed annotations including object bounding boxes and part landmarks. By transferring as much knowledge as possible from existing strongly supervised datasets to weakly supervised web images, our method can benefit from sophisticated object recognition algorithms and overcome several typical problems found in webly-supervised learning. We consider the problem of fine-grained visual categorization, in which existing training resources are scarce, as our main research objective. Comprehensive experimentation and extensive analysis demonstrate encouraging performance of the proposed approach, which, at the same time, delivers a new pipeline for fine-grained visual categorization that is likely to be highly effective for real-world applications.

  16. When does reading dirty words impede picture processing? Taboo interference with verbal and manual responses.

    PubMed

    Mädebach, Andreas; Markuske, Anna-Maria; Jescheniak, Jörg D

    2018-05-22

Picture naming takes longer in the presence of socially inappropriate (taboo) distractor words compared with neutral distractor words. Previous studies have attributed this taboo interference effect to increased attentional capture by taboo words or to verbal self-monitoring, that is, control processes scrutinizing verbal responses before articulation. In this study, we investigated the cause and locus of the taboo interference effect by contrasting three tasks that used the same target pictures, but systematically differed with respect to the processing stages involved: picture naming (requiring conceptual processing, lexical processing, and articulation), phoneme decision (requiring conceptual and lexical processing), and natural size decision (requiring conceptual processing only). We observed taboo interference in picture naming and phoneme decision. In size decision, taboo interference was not reliably observed under the same task conditions in which the effect arose in picture naming and phoneme decision, but it emerged when the difficulty of the size decision task was increased by visually degrading the target pictures. Overall, these results suggest that taboo interference cannot be exclusively attributed to verbal self-monitoring operating over articulatory responses. Instead, taboo interference appears to arise prior to articulatory preparation, during lexical processing and, at least with sufficiently high task difficulty, during prelexical processing stages.

  17. Acoustic and perceptual effects of overall F0 range in a lexical pitch accent distinction

    NASA Astrophysics Data System (ADS)

    Wade, Travis

    2002-05-01

A speaker's overall fundamental frequency range is generally considered a variable, nonlinguistic element of intonation. This study examined the precision with which overall F0 is predictable based on previous intonational context and the extent to which it may be perceptually significant. Speakers of Tokyo Japanese produced pairs of sentences differing lexically only in the presence or absence of a single pitch accent as responses to visual and prerecorded speech cues presented in an interactive manner. F0 placement of high tones (previously observed to be relatively variable in pitch contours) was found to be consistent across speakers and uniformly dependent on the intonation of the different sentences used as cues. In a subsequent perception experiment, continuous manipulations of these same sentences, ranging between typical accented and typical non-accent-containing versions, were presented to Japanese listeners for lexical identification. Results showed that listeners' perception was not significantly altered in compensation for artificial manipulation of preceding intonation. Implications are discussed within an autosegmental analysis of tone. The current results are consistent with the notion that pitch range (i.e., specific vertical locations of tonal peaks) does not simply vary gradiently across speakers and situations but constitutes a predictable part of the phonetic specification of tones.

  18. Individual differences in online spoken word recognition: Implications for SLI

    PubMed Central

    McMurray, Bob; Samelson, Vicki M.; Lee, Sung Hee; Tomblin, J. Bruce

    2012-01-01

    Thirty years of research has uncovered the broad principles that characterize spoken word processing across listeners. However, there have been few systematic investigations of individual differences. Such an investigation could help refine models of word recognition by indicating which processing parameters are likely to vary, and could also have important implications for work on language impairment. The present study begins to fill this gap by relating individual differences in overall language ability to variation in online word recognition processes. Using the visual world paradigm, we evaluated online spoken word recognition in adolescents who varied in both basic language abilities and non-verbal cognitive abilities. Eye movements to target, cohort and rhyme objects were monitored during spoken word recognition, as an index of lexical activation. Adolescents with poor language skills showed fewer looks to the target and more fixations to the cohort and rhyme competitors. These results were compared to a number of variants of the TRACE model (McClelland & Elman, 1986) that were constructed to test a range of theoretical approaches to language impairment: impairments at sensory and phonological levels; vocabulary size, and generalized slowing. None were strongly supported, and variation in lexical decay offered the best fit. Thus, basic word recognition processes like lexical decay may offer a new way to characterize processing differences in language impairment. PMID:19836014

  19. Neural Systems Underlying Lexical Competition: An Eyetracking and fMRI Study

    PubMed Central

    Righi, Giulia; Blumstein, Sheila E.; Mertus, John; Worden, Michael S.

    2010-01-01

    The present study investigated the neural bases of phonological onset competition using an eye tracking paradigm coupled with fMRI. Eighteen subjects were presented with an auditory target (e.g. beaker) and a visual display containing a pictorial representation of the target (e.g. beaker), an onset competitor (e.g. beetle), and two phonologically and semantically unrelated objects (e.g. shoe, hammer). Behavioral results replicated earlier research showing increased looks to the onset competitor compared to the unrelated items. fMRI results showed that lexical competition induced by shared phonological onsets recruits both frontal structures and posterior structures. Specifically, comparison between competitor and no-competitor trials elicited activation in two non-overlapping clusters in the left IFG, one located primarily within BA 44 and the other primarily located within BA 45, and one cluster in the left supramarginal gyrus extending into the posterior-superior temporal gyrus. These results indicate that the left IFG is sensitive to competition driven by phonological similarity and not only to competition among semantic/conceptual factors. Moreover, they indicate that the SMG is not only recruited in tasks requiring access to lexical form but is also recruited in tasks that require access to the conceptual representation of a word. PMID:19301991

  20. Effects of relative embodiment in lexical and semantic processing of verbs.

    PubMed

    Sidhu, David M; Kwan, Rachel; Pexman, Penny M; Siakaluk, Paul D

    2014-06-01

    Research examining semantic richness effects in visual word recognition has shown that multiple dimensions of meaning are activated in the process of word recognition (e.g., Yap et al., 2012). This research has, however, been limited to nouns. In the present research we extended the semantic richness approach to verb stimuli in order to investigate how verb meanings are represented. We characterized a dimension of relative embodiment for verbs, based on the bodily sense described by Borghi and Cimatti (2010), and collected ratings on that dimension for 687 English verbs. The relative embodiment ratings revealed that bodily experience was judged to be more important to the meanings of some verbs (e.g., dance, breathe) than to others (e.g., evaporate, expect). We then tested the effects of relative embodiment and imageability on verb processing in lexical decision (Experiment 1), action picture naming (Experiment 2), and syntactic classification (Experiment 3). In all three experiments results showed facilitatory effects of relative embodiment, but not imageability: latencies were faster for relatively more embodied verbs, even after several other lexical variables were controlled. The results suggest that relative embodiment is an important aspect of verb meaning, and that the semantic richness approach holds promise as a strategy for investigating other aspects of verb meaning. Copyright © 2014 Elsevier B.V. All rights reserved.

  1. Task by stimulus interactions in brain responses during Chinese character processing.

    PubMed

    Yang, Jianfeng; Wang, Xiaojuan; Shu, Hua; Zevin, Jason D

    2012-04-02

    In the visual word recognition literature, it is well understood that various stimulus effects interact with behavioral task. For example, effects of word frequency are exaggerated and effects of spelling-to-sound regularity are reduced in the lexical decision task, relative to reading aloud. Neuroimaging studies of reading often examine effects of task and stimulus properties on brain activity independently, but potential interactions between task demands and stimulus effects have not been extensively explored. To address this issue, we conducted lexical decision and symbol detection tasks using stimuli that varied parametrically in their word-likeness, and tested for task by stimulus class interactions. Interactions were found throughout the reading system, such that stimulus selectivity was observed during the lexical decision task, but not during the symbol detection task. Further, the pattern of stimulus selectivity was directly related to task difficulty, so that the strongest brain activity was observed to the most word-like stimuli that required "no" responses, whereas brain activity to words, which elicit rapid and accurate "yes" responses were relatively weak. This is in line with models that argue for task-dependent specialization of brain regions, and contrasts with the notion of task-independent stimulus selectivity in the reading system. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Morphological Family Size Effects in Young First and Second Language Learners: Evidence of Cross-Language Semantic Activation in Visual Word Recognition

    ERIC Educational Resources Information Center

    de Zeeuw, Marlies; Verhoeven, Ludo; Schreuder, Robert

    2012-01-01

    This study examined to what extent young second language (L2) learners showed morphological family size effects in L2 word recognition and whether the effects were grade-level related. Turkish-Dutch bilingual children (L2) and Dutch (first language, L1) children from second, fourth, and sixth grade performed a Dutch lexical decision task on words…

  3. Human striatal activation during adjustment of the response criterion in visual word recognition.

    PubMed

    Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred

    2011-02-01

    Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.

  4. Third and fifth graders' processing of parafoveal information in reading: A study in single-word recognition.

    PubMed

    Khelifi, Rachid; Sparrow, Laurent; Casalis, Séverine

    2015-11-01

    We assessed third and fifth graders' processing of parafoveal word information using a lexical decision task. On each trial, a preview word was first briefly presented parafoveally in the left or right visual field before a target word was displayed. Preview and target words could be identical, share the first three letters, or have no letters in common. Experiment 1 showed that developing readers receive the same word recognition benefit from parafoveal previews as expert readers. The impact of a change of case between preview and target in Experiment 2 showed that in all groups of readers, the preview benefit resulted from the identification of letters at an abstract level rather than from facilitation at a purely visual level. Fifth graders identified more letters from the preview than third graders. The results are interpreted within the framework of the interactive activation model. In particular, we suggest that although the processing of parafoveal information led to letter identification in developing readers, the processes involved may differ from those in expert readers. Although expert readers' processing of parafoveal information led to activation at the level of lexical representations, no such activation was observed in developing readers. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Basic-level categorization of intermediate complexity fragments reveals top-down effects of expertise in visual perception.

    PubMed

    Harel, Assaf; Ullman, Shimon; Harari, Danny; Bentin, Shlomo

    2011-07-28

    Visual expertise is usually defined as the superior ability to distinguish between exemplars of a homogeneous category. Here, we ask how real-world expertise manifests at basic-level categorization and assess the contribution of stimulus-driven and top-down knowledge-based factors to this manifestation. Car experts and novices categorized computer-selected image fragments of cars, airplanes, and faces. Within each category, the fragments varied in their mutual information (MI), an objective quantifiable measure of feature diagnosticity. Categorization of face and airplane fragments was similar within and between groups, showing better performance with increasing MI levels. Novices categorized car fragments more slowly than face and airplane fragments, while experts categorized car fragments as fast as face and airplane fragments. The experts' advantage with car fragments was similar across MI levels, with similar functions relating RT with MI level for both groups. Accuracy was equal between groups for cars as well as faces and airplanes, but experts' response criteria were biased toward cars. These findings suggest that expertise does not entail only specific perceptual strategies. Rather, at the basic level, expertise manifests as a general processing advantage arguably involving application of top-down mechanisms, such as knowledge and attention, which helps experts to distinguish between object categories. © ARVO

  6. Visual search and autism symptoms: What young children search for and co-occurring ADHD matter.

    PubMed

    Doherty, Brianna R; Charman, Tony; Johnson, Mark H; Scerif, Gaia; Gliga, Teodora

    2018-05-03

    Superior visual search is one of the most common findings in the autism spectrum disorder (ASD) literature. Here, we ascertain how generalizable these findings are across task and participant characteristics, in light of recent replication failures. We tested 106 3-year-old children at familial risk for ASD, a sample that presents high ASD and ADHD symptoms, and 25 control participants, in three multi-target search conditions: easy exemplar search (look for cats amongst artefacts), difficult exemplar search (look for dogs amongst chairs/tables perceptually similar to dogs), and categorical search (look for animals amongst artefacts). Performance was related to dimensional measures of ASD and ADHD, in agreement with current research domain criteria (RDoC). We found that ASD symptom severity did not associate with enhanced performance in search, but did associate with poorer categorical search in particular, consistent with literature describing impairments in categorical knowledge in ASD. Furthermore, ASD and ADHD symptoms were both associated with more disorganized search paths across all conditions. Thus, ASD traits do not always convey an advantage in visual search; on the contrary, ASD traits may be associated with difficulties in search depending upon the nature of the stimuli (e.g., exemplar vs. categorical search) and the presence of co-occurring symptoms. © 2018 John Wiley & Sons Ltd.

  7. Face adaptation aftereffects reveal anterior medial temporal cortex role in high level category representation.

    PubMed

    Furl, N; van Rijsbergen, N J; Treves, A; Dolan, R J

    2007-08-01

    Previous studies have shown reductions of the functional magnetic resonance imaging (fMRI) signal in response to repetition of specific visual stimuli. We examined how adaptation affects the neural responses associated with categorization behavior, using face adaptation aftereffects. Adaptation to a given facial category biases categorization towards non-adapted facial categories in response to presentation of ambiguous morphs. We explored a hypothesis, posed by recent psychophysical studies, that these adaptation-induced categorizations are mediated by activity in relatively advanced stages within the occipitotemporal visual processing stream. Replicating these studies, we find that adaptation to a facial expression heightens perception of non-adapted expressions. Using comparable behavioral methods, we also show that adaptation to a specific identity heightens perception of a second identity in morph faces. We show both expression and identity effects to be associated with heightened anterior medial temporal lobe activity, specifically when perceiving the non-adapted category. These regions, incorporating the bilateral anterior ventral rhinal cortices, perirhinal cortex, and left anterior hippocampus, have previously been implicated in high-level visual perception. These categorization effects were not evident in the fusiform or occipital gyri, although activity in these regions was reduced for repeated faces. The findings suggest that adaptation-induced perception is mediated by activity in regions downstream of those showing reductions due to stimulus repetition.

  8. Edge co-occurrences can account for rapid categorization of natural versus animal images

    NASA Astrophysics Data System (ADS)

    Perrinet, Laurent U.; Bednar, James A.

    2015-06-01

    Making a judgment about the semantic category of a visual scene, such as whether it contains an animal, is typically assumed to involve high-level associative brain areas. Previous explanations assume that the scene is analyzed hierarchically, at increasing levels of abstraction, from edge extraction through mid-level object recognition to object categorization. Here we show that the statistics of edge co-occurrences alone are sufficient to perform a rough yet robust (translation, scale, and rotation invariant) scene categorization. We first extracted the edges from images using a scale-space analysis coupled with a sparse coding algorithm. We then computed the “association field” for different categories (natural, man-made, or containing an animal) by computing the statistics of edge co-occurrences. These differed strongly, with animal images having more curved configurations. We show that this geometry alone is sufficient for categorization, and that the pattern of errors made by humans is consistent with this procedure. Because these statistics could be measured as early as the primary visual cortex, the results challenge widely held assumptions about the flow of computations in the visual system. The results also suggest new algorithms for image classification and signal processing that exploit correlations between low-level structure and the underlying semantic category.
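
    The sketch below is a minimal illustration of the general idea, not the authors' implementation: it substitutes a simple gradient-based edge detector for their scale-space/sparse-coding front end, histograms the relative orientation and distance of random edge pairs as a stand-in for the "association field" statistics, and compares category histograms with a chi-square distance.

        # Illustrative sketch of edge co-occurrence statistics (assumptions noted above).
        import numpy as np

        def edge_elements(image, magnitude_threshold=0.2):
            """Return (y, x, orientation) of strong edge pixels in a grayscale image."""
            gy, gx = np.gradient(image.astype(float))
            mag = np.hypot(gx, gy)
            theta = np.arctan2(gy, gx) % np.pi          # orientation is axial (0..pi)
            keep = mag > magnitude_threshold * mag.max()
            ys, xs = np.nonzero(keep)
            return ys, xs, theta[keep]

        def cooccurrence_histogram(ys, xs, theta, n_angle_bins=12, n_dist_bins=8,
                                   max_pairs=20000, rng=None):
            """Histogram of (relative orientation, log distance) over random edge pairs."""
            rng = np.random.default_rng(rng)
            n = len(ys)
            i = rng.integers(0, n, size=max_pairs)
            j = rng.integers(0, n, size=max_pairs)
            i, j = i[i != j], j[i != j]
            d_theta = np.abs(theta[i] - theta[j])
            d_theta = np.minimum(d_theta, np.pi - d_theta)    # axial difference
            dist = np.hypot(ys[i] - ys[j], xs[i] - xs[j])
            hist, _, _ = np.histogram2d(d_theta, np.log1p(dist),
                                        bins=[n_angle_bins, n_dist_bins])
            return hist / hist.sum()

        def histogram_distance(h1, h2):
            """Chi-square distance between two normalized co-occurrence histograms."""
            return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-12))

        # Usage: build a mean histogram per category from training images, then assign
        # a new image to the category whose mean histogram is closest.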

  9. Predicting Lexical Proficiency in Language Learner Texts Using Computational Indices

    ERIC Educational Resources Information Center

    Crossley, Scott A.; Salsbury, Tom; McNamara, Danielle S.; Jarvis, Scott

    2011-01-01

    The authors present a model of lexical proficiency based on lexical indices related to vocabulary size, depth of lexical knowledge, and accessibility to core lexical items. The lexical indices used in this study come from the computational tool Coh-Metrix and include word length scores, lexical diversity values, word frequency counts, hypernymy…

  10. Visual word form familiarity and attention in lateral difference during processing Japanese Kana words.

    PubMed

    Nakagawa, A; Sukigara, M

    2000-09-01

    The purpose of this study was to examine the relationship between familiarity and laterality in reading Japanese Kana words. In two divided-visual-field experiments, three- or four-character Hiragana or Katakana words were presented in both familiar and unfamiliar scripts, and subjects performed lexical decisions on them. Experiment 1, using three stimulus durations (40, 100, 160 ms), suggested that only in the unfamiliar script condition did increasing the stimulus presentation time affect the two visual fields differently. To examine whether this lateral difference during the processing of unfamiliar scripts is related to attentional laterality, a concurrent auditory shadowing task was added in Experiment 2. The results suggested that processing words in an unfamiliar script requires attention, which could be left-hemisphere lateralized, whereas orthographically familiar Kana words can be processed automatically on the basis of their word-level orthographic representations or visual word form. Copyright 2000 Academic Press.

  11. Does silent reading speed in normal adult readers depend on early visual processes? evidence from event-related brain potentials.

    PubMed

    Korinth, Sebastian Peter; Sommer, Werner; Breznitz, Zvia

    2012-01-01

    Little is known about the relationship between reading speed and early visual processes in normal readers. Here we examined the association of the early P1, N170, and late N1 components of visual event-related potentials (ERPs) with silent reading speed and a number of additional cognitive skills in a sample of 52 adult German readers, utilizing a Lexical Decision Task (LDT) and a Face Decision Task (FDT). Amplitudes of the N170 component correlated with behavioral measures of silent reading speed in the LDT but, interestingly, also in the FDT. We suggest that reading speed performance can be at least partially accounted for by the extraction of essential structural information from visual stimuli, consisting of a domain-general and a domain-specific, expertise-based portion. © 2011 Elsevier Inc. All rights reserved.

  12. Transient global amnesia: implicit/explicit memory dissociation and PET assessment of brain perfusion and oxygen metabolism in the acute stage.

    PubMed

    Eustache, F; Desgranges, B; Petit-Taboué, M C; de la Sayette, V; Piot, V; Sablé, C; Marchal, G; Baron, J C

    1997-09-01

    Our aim was to assess explicit memory and two components of implicit memory--that is, perceptual-verbal skill learning and lexical-semantic priming effects--as well as resting cerebral blood flow (CBF) and oxygen metabolism (CMRO2) during the acute phase of transient global amnesia. In a 59-year-old woman, whose amnestic episode fulfilled all current criteria for transient global amnesia, a neuropsychological protocol was administered, including word learning, story recall, categorical fluency, mirror reading, and word stem completion tasks. PET, interleaved with the cognitive tests, was performed using the ¹⁵O steady-state inhalation method while the patient still exhibited severe anterograde amnesia. There was a clear-cut dissociation between impaired long-term episodic memory and preserved implicit memory for both of its components. Categorical fluency was significantly altered, suggesting an impairment of word-retrieval strategy rather than of semantic memory. The PET study disclosed reduced CMRO2 with relatively or fully preserved CBF in the left prefrontotemporal cortex and lentiform nucleus, and the reverse pattern over the left occipital cortex. The PET alterations, with patchy CBF-CMRO2 uncoupling, would be compatible with a migraine-like phenomenon and indicate that the isolated assessment of perfusion in transient global amnesia may be misleading. The pattern of metabolic depression, with sparing of the hippocampal area, is one among the distinct patterns of brain dysfunction that underlie the (apparently) uniform clinical presentation of transient global amnesia. The finding of left prefrontal hypometabolism in the face of impaired episodic memory and altered verbal fluency would fit present-day concepts from PET activation studies about the role of this area in episodic and semantic memory encoding/retrieval. Likewise, the changes affecting the lenticular nucleus but sparing the caudate would be consistent with the normal performance in perceptual-verbal skill learning. Finally, unaltered lexical-semantic priming effects, despite left temporal cortex hypometabolism, suggest that these processes are subserved by a more distributed neocortical network.

  13. Variation in spatial language and cognition: exploring visuo-spatial thinking and speaking cross-linguistically.

    PubMed

    Soroli, Efstathia

    2012-08-01

    Languages differ strikingly in how they encode spatial information. This variability arises because spatial semantic elements are mapped onto lexical/syntactic structures in very different ways across languages. For example, satellite-framed languages (e.g., English) express MANNER in the verb and PATH in satellites, while verb-framed languages (e.g., French) lexicalize PATH in the verb, leaving MANNER implicit or peripheral. Some languages are harder to classify into these categories and instead present equipollently framed systems, such as Chinese (serial-verb constructions) or Greek (parallel verb- and satellite-framed structures in equally frequent contexts). Such properties seem to have implications not only at the levels of formulation and articulation but also at the level of conceptualization, thereby reviving questions concerning the language-thought interface. The present study investigates the relative impact of language-independent and language-specific factors on spatial representations across three typologically different languages (English, French, Greek), combining a variety of complementary tasks (production, non-verbal, and verbal categorization). The findings show that typological properties of languages can have an impact on both the linguistic and the non-linguistic organization of spatial information, open new perspectives for the investigation of conceptualization, and contribute more generally to the debate concerning the universal and language-specific dimensions of cognition.

  14. The word frequency effect during sentence reading: A linear or nonlinear effect of log frequency?

    PubMed

    White, Sarah J; Drieghe, Denis; Liversedge, Simon P; Staub, Adrian

    2016-10-20

    The effect of word frequency on eye movement behaviour during reading has been reported in many experimental studies. However, the vast majority of these studies compared only two levels of word frequency (high and low). Here we assess whether the effect of log word frequency on eye movement measures is linear, in an experiment in which a critical target word in each sentence was at one of three approximately equally spaced log frequency levels. Separate analyses treated log frequency as a categorical or a continuous predictor. Both analyses showed only a linear effect of log frequency on the likelihood of skipping a word, and on first fixation duration. Ex-Gaussian analyses of first fixation duration showed similar effects on distributional parameters when comparing high- with medium-frequency words and medium- with low-frequency words. Analyses of gaze duration and the probability of a refixation suggested a nonlinear pattern, with a larger effect at the lower end of the log frequency scale. However, the nonlinear effects were small, and Bayes Factor analyses favoured the simpler linear models for all measures. The possible roles of lexical and post-lexical factors in producing nonlinear effects of log word frequency during sentence reading are discussed.
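
    A minimal sketch of the kind of model comparison described above, on simulated data with hypothetical values: a continuous (linear) log-frequency predictor is compared with a categorical three-level coding of the same variable, using the BIC difference as a rough approximation to a Bayes factor rather than the Bayesian analyses actually reported.

        # Illustrative sketch: linear vs. categorical coding of log frequency.
        import numpy as np

        def fit_ols(X, y):
            """Least-squares fit; returns residual sum of squares and number of parameters."""
            X = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            rss = np.sum((y - X @ beta) ** 2)
            return rss, X.shape[1]

        def bic(rss, n, k):
            return n * np.log(rss / n) + k * np.log(n)

        rng = np.random.default_rng(0)
        n_items = 300
        level = rng.integers(0, 3, n_items)            # low / medium / high frequency
        log_freq = np.array([1.0, 2.5, 4.0])[level]    # approximately equally spaced
        # Simulated first fixation durations with a purely linear frequency effect
        duration = 260 - 12 * log_freq + rng.normal(0, 25, n_items)

        rss_lin, k_lin = fit_ols(log_freq, duration)
        rss_cat, k_cat = fit_ols(np.eye(3)[level][:, 1:], duration)   # dummy coding
        bf_linear_over_cat = np.exp((bic(rss_cat, n_items, k_cat)
                                     - bic(rss_lin, n_items, k_lin)) / 2)
        print(f"Approximate BF favouring the linear model: {bf_linear_over_cat:.2f}")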

  15. Dutch modality exclusivity norms: Simulating perceptual modality in space.

    PubMed

    Speed, Laura J; Majid, Asifa

    2017-12-01

    Perceptual information is important for the meaning of nouns. We present modality exclusivity norms for 485 Dutch nouns rated on visual, auditory, haptic, gustatory, and olfactory associations. We found that these nouns are highly multimodal. They were rated most dominant in vision, and least in olfaction. A factor analysis identified two main dimensions: one loaded strongly on olfaction and gustation (reflecting their joint involvement in flavor), and a second loaded strongly on vision and touch (reflecting their joint involvement in manipulable objects). In a second study, we validated the ratings with similarity judgments. As expected, words from the same dominant modality were rated more similar than words from different dominant modalities; but - more importantly - this effect was enhanced when word pairs had high modality strength ratings. In a third study, we further demonstrated the utility of our ratings by investigating whether perceptual modalities are experienced differently in space. Nouns were categorized into their dominant modality and used in a lexical decision experiment in which the spatial position of words was either in proximal or distal space. We found that words dominant in olfaction were processed faster in proximal than in distal space compared to the other modalities, suggesting that olfactory information is mentally simulated as "close" to the body. Finally, we collected ratings of emotion (valence, dominance, and arousal) to assess its role in perceptual space simulation, but valence did not explain the data. So, words are processed differently depending on their perceptual associations, and the strength of those associations is captured by modality exclusivity ratings.
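
    For illustration only, the sketch below implements one common operationalization of modality exclusivity (the range of the five modality ratings divided by their sum) and of dominant modality (the highest-rated modality). The formula and the example ratings are assumptions for the sketch, not necessarily the exact procedure used for these norms.

        # Illustrative sketch: dominant modality and a modality exclusivity score.
        import numpy as np

        MODALITIES = ["visual", "auditory", "haptic", "gustatory", "olfactory"]

        def modality_profile(ratings):
            """ratings: five mean ratings, in the order of MODALITIES (e.g., a 0-5 scale)."""
            r = np.asarray(ratings, dtype=float)
            exclusivity = (r.max() - r.min()) / r.sum()   # 0 = fully multimodal, 1 = unimodal
            dominant = MODALITIES[int(np.argmax(r))]
            return dominant, exclusivity

        # Example: a noun rated high on olfaction and gustation, low elsewhere
        print(modality_profile([1.2, 0.8, 0.9, 3.8, 4.3]))   # ('olfactory', ~0.32)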

  16. Is nevtral NEUTRAL? Visual similarity effects in the early phases of written-word recognition.

    PubMed

    Marcet, Ana; Perea, Manuel

    2017-08-01

    For simplicity, contemporary models of written-word recognition and reading have unspecified feature/letter levels: they predict that the visually similar substituted-letter nonword PEQPLE is as effective at activating the word PEOPLE as the visually dissimilar substituted-letter nonword PEYPLE. Previous empirical evidence on the effects of visual similarity across letters during written-word recognition is scarce and inconclusive. To examine whether visual similarity across letters plays a role early in word processing, we conducted two masked priming lexical decision experiments (stimulus-onset asynchrony = 50 ms). The substituted-letter primes were visually very similar to the target letters (u/v in Experiment 1 and i/j in Experiment 2; e.g., nevtral-NEUTRAL). For comparison purposes, we included an identity prime condition (neutral-NEUTRAL) and a dissimilar-letter prime condition (neztral-NEUTRAL). Results showed that the similar-letter prime condition produced faster word identification times than the dissimilar-letter prime condition. We discuss how models of written-word recognition should be amended to capture visual similarity effects across letters.

  17. From Lexical Tone to Lexical Stress: A Cross-Language Mediation Model for Cantonese Children Learning English as a Second Language

    PubMed Central

    Choi, William; Tong, Xiuli; Singh, Leher

    2017-01-01

    This study investigated how Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity among Cantonese children who learned English as a second language (ESL). Five hundred and sixteen second- and third-grade Cantonese ESL children were tested on their Cantonese lexical tone sensitivity, English lexical stress sensitivity, general auditory sensitivity, and working memory. Structural equation modeling revealed that Cantonese lexical tone sensitivity contributed to English lexical stress sensitivity both directly and indirectly through the mediation of general auditory sensitivity, with the direct pathway making a larger relative contribution to English lexical stress sensitivity than the indirect pathway. These results suggest that the tone-stress association might be accounted for by joint phonological and acoustic processes that underlie lexical tone and lexical stress perception. PMID:28408898
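
    As a simplified stand-in for the structural equation model described above, the sketch below runs a regression-based mediation analysis with a bootstrapped indirect effect; the variable names and data are hypothetical.

        # Illustrative sketch: direct effect, indirect (mediated) effect, and a bootstrap CI.
        import numpy as np

        def ols_slope(x, y):
            """Slope of y regressed on x (with intercept)."""
            X = np.column_stack([np.ones(len(x)), x])
            return np.linalg.lstsq(X, y, rcond=None)[0][1]

        def ols_partial_slopes(x, m, y):
            """Slopes of y on x and on m, entered together."""
            X = np.column_stack([np.ones(len(x)), x, m])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            return beta[1], beta[2]     # direct effect of x, effect of m

        def mediation(tone, auditory, stress, n_boot=2000, rng=None):
            rng = np.random.default_rng(rng)
            a = ols_slope(tone, auditory)                          # X -> M
            direct, b = ols_partial_slopes(tone, auditory, stress) # X -> Y, M -> Y
            indirect = a * b
            boots = []
            idx_all = np.arange(len(tone))
            for _ in range(n_boot):
                idx = rng.choice(idx_all, size=len(idx_all), replace=True)
                a_b = ols_slope(tone[idx], auditory[idx])
                _, b_b = ols_partial_slopes(tone[idx], auditory[idx], stress[idx])
                boots.append(a_b * b_b)
            ci = np.percentile(boots, [2.5, 97.5])
            return direct, indirect, ci

        # Usage: direct, indirect, ci = mediation(tone_scores, auditory_scores, stress_scores)
        # where each argument is a NumPy array of per-child scores.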

  18. Implicit phonological priming during visual word recognition.

    PubMed

    Wilson, Lisa B; Tregellas, Jason R; Slason, Erin; Pasko, Bryce E; Rojas, Donald C

    2011-03-15

    Phonology is a lower-level structural aspect of language involving the sounds of a language and their organization in that language. Numerous behavioral studies utilizing priming, which refers to an increased sensitivity to a stimulus following prior experience with that or a related stimulus, have provided evidence for the role of phonology in visual word recognition. However, most language studies utilizing priming in conjunction with functional magnetic resonance imaging (fMRI) have focused on lexical-semantic aspects of language processing. The aim of the present study was to investigate the neurobiological substrates of the automatic, implicit stages of phonological processing. While undergoing fMRI, eighteen individuals performed a lexical decision task (LDT) on prime-target pairs, including word-word homophone and pseudoword-word pseudohomophone pairs, with primes presented below the perceptual threshold. Whole-brain analyses revealed several cortical regions exhibiting hemodynamic response suppression due to phonological priming, including the bilateral superior temporal gyri (STG), middle temporal gyri (MTG), and angular gyri (AG), with additional region of interest (ROI) analyses revealing response suppression in the left-lateralized supramarginal gyrus (SMG). Homophone and pseudohomophone priming also resulted in different patterns of hemodynamic responses relative to one another. These results suggest that phonological processing plays a key role in visual word recognition. Furthermore, enhanced hemodynamic responses for unrelated stimuli relative to primed stimuli were observed in midline cortical regions corresponding to the default-mode network (DMN), suggesting that DMN activity can be modulated by task requirements within the context of an implicit task. Copyright © 2010 Elsevier Inc. All rights reserved.

  19. Magnocellular-dorsal pathway and sub-lexical route in developmental dyslexia

    PubMed Central

    Gori, Simone; Cecchini, Paolo; Bigoni, Anna; Molteni, Massimo; Facoetti, Andrea

    2014-01-01

    Although developmental dyslexia (DD) is frequently associated with a phonological deficit, the underlying neurobiological cause remains undetermined. Recently, a new model, called the “temporal sampling framework” (TSF), provided an innovative perspective on the study of DD. The TSF suggests that deficits in syllabic perception at specific temporal frequencies are the critical basis for the poor reading performance in DD. This approach was presented as a possible neurobiological substrate of the phonological deficit of DD, but the TSF can also easily be applied to deficits in the visual modality. The deficit in the magnocellular-dorsal (M-D) pathway - often found in individuals with DD - fits well with a temporal oscillatory deficit specifically related to this visual pathway. This study investigated the visual M-D and parvocellular-ventral (P-V) pathways in dyslexic children and in chronological-age- and IQ-matched normally reading children by measuring sensitivity to temporal (frequency doubling illusion) and static stimuli, respectively. A specific deficit in M-D temporal oscillation was found. Importantly, the M-D deficit was selectively shown in poor phonological decoders. The M-D deficit appears to be frequent, because 75% of poor pseudo-word readers were at least 1 SD below the mean of the controls. Finally, a replication study using a new group of poor phonological decoders and reading-level controls suggested a crucial role of the M-D deficit in DD. These results show that an M-D deficit might impair the sub-lexical mechanisms that are critical for reading development. The possible link between these findings and the TSF is discussed. PMID:25009484

  20. Surviving blind decomposition: A distributional analysis of the time-course of complex word recognition.

    PubMed

    Schmidtke, Daniel; Matsuki, Kazunaga; Kuperman, Victor

    2017-11-01

    The current study addresses a discrepancy in the psycholinguistic literature about the chronology of information processing during the visual recognition of morphologically complex words. Form-then-meaning accounts of complex word recognition claim that morphemes are processed as units of form prior to any influence of their meanings, whereas form-and-meaning models posit that recognition of complex word forms involves the simultaneous access of morphological and semantic information. The study reported here addresses this theoretical discrepancy by applying a nonparametric distributional technique of survival analysis (Reingold & Sheridan, 2014) to 2 behavioral measures of complex word processing. Across 7 experiments reported here, this technique is employed to estimate the point in time at which orthographic, morphological, and semantic variables exert their earliest discernible influence on lexical decision RTs and eye movement fixation durations. Contrary to form-then-meaning predictions, Experiments 1-4 reveal that surface frequency is the earliest lexical variable to exert a demonstrable influence on lexical decision RTs for English and Dutch derived words (e.g., badness; bad + ness), English pseudoderived words (e.g., wander; wand + er) and morphologically simple control words (e.g., ballad; ball + ad). Furthermore, for derived word processing across lexical decision and eye-tracking paradigms (Experiments 1-2 and 5-7), semantic effects emerge early in the time-course of word recognition, and their effects either precede or emerge simultaneously with morphological effects. These results are not consistent with the premises of the form-then-meaning view of complex word recognition, but are convergent with a form-and-meaning account of complex word recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
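
    The sketch below gives a simplified flavour of a survival-based divergence-point estimate: it bootstraps the difference between two conditions' RT survival curves and returns the earliest time point at which that difference is reliably positive. It is an illustration under these assumptions, not the exact Reingold and Sheridan (2014) procedure.

        # Illustrative sketch: divergence-point estimate from two RT distributions.
        import numpy as np

        def survival_curve(rts, timeline):
            """Proportion of trials with RT greater than each time point."""
            rts = np.asarray(rts)
            return np.array([(rts > t).mean() for t in timeline])

        def divergence_point(rts_a, rts_b, t_max=1500, step=1,
                             n_boot=1000, alpha=0.05, rng=None):
            """Earliest time at which condition A's survival reliably exceeds B's."""
            rng = np.random.default_rng(rng)
            timeline = np.arange(0, t_max, step)
            diffs = np.empty((n_boot, len(timeline)))
            for b in range(n_boot):
                sample_a = rng.choice(rts_a, size=len(rts_a), replace=True)
                sample_b = rng.choice(rts_b, size=len(rts_b), replace=True)
                diffs[b] = survival_curve(sample_a, timeline) - survival_curve(sample_b, timeline)
            lower = np.percentile(diffs, 100 * alpha, axis=0)
            significant = lower > 0      # condition A reliably slower at this time point
            idx = np.argmax(significant)
            return timeline[idx] if significant.any() else None

        # Usage: divergence_point(rts_low_freq, rts_high_freq) returns the earliest
        # millisecond at which the frequency effect is reliably present, or None.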
