Sample records for recognition task experiment

  1. The Costs and Benefits of Testing and Guessing on Recognition Memory

    PubMed Central

    Huff, Mark J.; Balota, David A.; Hutchison, Keith A.

    2016-01-01

    We examined whether two types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler tasks, particularly when study lists were semantically related. However, both retrieval practice and guessing also generally inflated false recognition for the non-presented critical words. These patterns were found when final recognition was completed after a short delay within the same experimental session (Experiment 1) and following a 24-hr delay (Experiment 2). In Experiment 3, task instructions were presented randomly after each list to determine whether retrieval-practice and guessing effects were influenced by task-expectancy processes. In contrast to Experiments 1 and 2, final recognition following retrieval practice and guessing was equivalent to restudy, suggesting that the observed retrieval-practice and guessing advantages were in part due to preparatory task-based processing during study. PMID:26950490

  2. What Types of Visual Recognition Tasks Are Mediated by the Neural Subsystem that Subserves Face Recognition?

    ERIC Educational Resources Information Center

    Brooks, Brian E.; Cooper, Eric E.

    2006-01-01

    Three divided visual field experiments tested current hypotheses about the types of visual shape representation tasks that recruit the cognitive and neural mechanisms underlying face recognition. Experiment 1 found a right hemisphere advantage for subordinate but not basic-level face recognition. Experiment 2 found a right hemisphere advantage for…

  3. Recognition intent and visual word recognition.

    PubMed

    Wang, Man-Ying; Ching, Chi-Le

    2009-03-01

    This study adopted a change detection task to investigate whether and how recognition intent affects the construction of orthographic representation in visual word recognition. Chinese readers (Experiment 1-1) and nonreaders (Experiment 1-2) detected color changes in radical components of Chinese characters. Explicit recognition demand was imposed in Experiment 2 by an additional recognition task. When the recognition was implicit, a bias favoring the radical location informative of character identity was found in Chinese readers (Experiment 1-1), but not nonreaders (Experiment 1-2). With explicit recognition demands, the effect of radical location interacted with radical function and word frequency (Experiment 2). An estimate of identification performance under implicit recognition was derived in Experiment 3. These findings reflect the joint influence of recognition intent and orthographic regularity in shaping readers' orthographic representation. The implication for the role of visual attention in word recognition was also discussed.

  4. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    PubMed Central

    Yang, Jiongjiong; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiments 1–2) or pairs (Experiments 3–6) during the study phase. They then recalled the word from a cue during a cued recall task (Experiments 1–4), and judged whether two presented words were in the same or a different order as in the study phase during a recognition task (Experiments 1–6). To control for perceptual matching between the study and test phases, participants were presented with vertical test pairs when they made directional judgments in Experiment 5. In Experiment 6, participants also made associative recognition judgments for word pairs presented at the same or the reversed position. The results showed that forward associations were recalled at similar levels to backward associations, and that the correlations between forward and backward associations were high in the cued recall tasks. On the other hand, the direction of forward associations was recognized more accurately (and more quickly) than backward associations, and their correlations were comparable to the control condition in the recognition tasks. This forward advantage was also obtained for the associative recognition task. Diminishing positional information did not change the pattern of associative asymmetry. These results suggest that associative asymmetry is modulated by cued recall and recognition manipulations, and that direction, as a constituent part of a memory trace, can facilitate associative memory. PMID:22924326

  5. Mind wandering in text comprehension under dual-task conditions.

    PubMed

    Dixon, Peter; Li, Henry

    2013-01-01

    In two experiments, subjects responded to on-task probes while reading under dual-task conditions. The secondary task was to monitor the text for occurrences of the letter e. In Experiment 1, reading comprehension was assessed with a multiple-choice recognition test; in Experiment 2, subjects recalled the text. In both experiments, the secondary task replicated the well-known "missing-letter effect" in which detection of e's was less effective for function words and the word "the." Letter detection was also more effective when subjects were on task, but this effect did not interact with the missing-letter effect. Comprehension was assessed in both the dual-task conditions and in control single-task conditions. In the single-task conditions, both recognition (Experiment 1) and recall (Experiment 2) were better when subjects were on task, replicating previous research on mind wandering. Surprisingly, though, comprehension under dual-task conditions showed an effect of being on task only when measured with recall; there was no effect on recognition performance. Our interpretation of this pattern of results is that subjects generate responses to on-task probes on the basis of a retrospective assessment of the contents of working memory. Further, we argue that under dual-task conditions, the contents of working memory are not closely related to the reading processes required for accurate recognition performance. These conclusions have implications for models of text comprehension and for the interpretation of on-task probe responses.

  6. Mind wandering in text comprehension under dual-task conditions

    PubMed Central

    Dixon, Peter; Li, Henry

    2013-01-01

    In two experiments, subjects responded to on-task probes while reading under dual-task conditions. The secondary task was to monitor the text for occurrences of the letter e. In Experiment 1, reading comprehension was assessed with a multiple-choice recognition test; in Experiment 2, subjects recalled the text. In both experiments, the secondary task replicated the well-known “missing-letter effect” in which detection of e's was less effective for function words and the word “the.” Letter detection was also more effective when subjects were on task, but this effect did not interact with the missing-letter effect. Comprehension was assessed in both the dual-task conditions and in control single-task conditions. In the single-task conditions, both recognition (Experiment 1) and recall (Experiment 2) were better when subjects were on task, replicating previous research on mind wandering. Surprisingly, though, comprehension under dual-task conditions showed an effect of being on task only when measured with recall; there was no effect on recognition performance. Our interpretation of this pattern of results is that subjects generate responses to on-task probes on the basis of a retrospective assessment of the contents of working memory. Further, we argue that under dual-task conditions, the contents of working memory are not closely related to the reading processes required for accurate recognition performance. These conclusions have implications for models of text comprehension and for the interpretation of on-task probe responses. PMID:24101909

  7. Evaluating the contributions of task expectancy in the testing and guessing benefits on recognition memory.

    PubMed

    Huff, Mark J; Yates, Tyler J; Balota, David A

    2018-05-03

    Recently, we have shown that two types of initial testing (recall of a list or guessing of critical items repeated over 12 study/test cycles) improved final recognition of related and unrelated word lists relative to restudy. These benefits were eliminated, however, when test instructions were manipulated within subjects and presented after study of each list, procedures designed to minimise expectancy of a specific type of upcoming test [Huff, Balota, & Hutchison, 2016. The costs and benefits of testing and guessing on recognition memory. Journal of Experimental Psychology: Learning, Memory, and Cognition, 42, 1559-1572. doi: 10.1037/xlm0000269], suggesting that testing and guessing effects may be influenced by encoding strategies specific to the type of upcoming task. We follow up these experiments by examining test-expectancy processes in guessing and testing. Testing and guessing benefits over restudy were not found when test instructions were presented either after (Experiment 1) or before (Experiment 2) a single study/task cycle was completed, nor were benefits found when instructions were presented before study/task cycles and the task was repeated three times (Experiment 3). Testing and guessing benefits emerged only when instructions were presented before a study/task cycle and the task was repeated six times (Experiments 4A and 4B). These experiments demonstrate that initial testing and guessing can produce memory benefits in recognition, but only following substantial task repetitions, which likely promote task-expectancy processes.

  8. Investigating the encoding-retrieval match in recognition memory: effects of experimental design, specificity, and retention interval.

    PubMed

    Dewhurst, Stephen A; Knott, Lauren M

    2010-12-01

    Five experiments investigated the encoding-retrieval match in recognition memory by manipulating read and generate conditions at study and at test. Experiments 1A and 1B confirmed previous findings that reinstating encoding operations at test enhances recognition accuracy in a within-groups design but reduces recognition accuracy in a between-groups design. Experiment 2A showed that generating from anagrams at study and at test enhanced recognition accuracy even when study and test items were generated from different anagrams. Experiment 2B showed that switching from one generation task at study (e.g., anagram solution) to a different generation task at test (e.g., fragment completion) eliminated this recognition advantage. Experiment 3 showed that the recognition advantage found in Experiment 1A is reliably present up to 1 week after study. The findings are consistent with theories of memory that emphasize the importance of the match between encoding and retrieval operations.

  9. The role of visual imagery in the retention of information from sentences.

    PubMed

    Drose, G S; Allen, G L

    1994-01-01

    We conducted two experiments to evaluate a multiple-code model for sentence memory that posits both propositional and visual representational systems. Both experiments involved recognition memory. The results of Experiment 1 indicated that subjects' recognition memory for concrete sentences was superior to their recognition memory for abstract sentences. Instructions to use visual imagery to enhance recognition performance yielded no effects. Experiment 2 tested the prediction that interference by a visual task would differentially affect recognition memory for concrete sentences. Results showed the interference task to have had a detrimental effect on recognition memory for both concrete and abstract sentences. Overall, the evidence provided partial support for both a multiple-code model and a semantic integration model of sentence memory.

  10. The own-age face recognition bias is task dependent.

    PubMed

    Proietti, Valentina; Macchi Cassia, Viola; Mondloch, Catherine J

    2015-08-01

    The own-age bias (OAB) in face recognition (more accurate recognition of own-age than other-age faces) is robust among young adults but not older adults. We investigated the OAB under two different task conditions. In Experiment 1 young and older adults (who reported more recent experience with own than other-age faces) completed a match-to-sample task with young and older adult faces; only young adults showed an OAB. In Experiment 2 young and older adults completed an identity detection task in which we manipulated the identity strength of target and distracter identities by morphing each face with an average face in 20% steps. Accuracy increased with identity strength and facial age influenced older adults' (but not younger adults') strategy, but there was no evidence of an OAB. Collectively, these results suggest that the OAB depends on task demands and may be absent when searching for one identity. © 2014 The British Psychological Society.

  11. The locus of word frequency effects in skilled spelling-to-dictation.

    PubMed

    Chua, Shi Min; Liow, Susan J Rickard

    2014-01-01

    In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling shows three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task that reflects the orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task that reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

  12. Local Navon letter processing affects skilled behavior: a golf-putting experiment.

    PubMed

    Lewis, Michael B; Dawkins, Gemma

    2015-04-01

    Expert or skilled behaviors (for example, face recognition or sporting performance) are typically performed automatically and with little conscious awareness. Previous studies, in various domains of performance, have shown that activities immediately prior to a task demanding a learned skill can affect performance. In sport, describing the to-be-performed action is detrimental, whereas in face recognition, describing a face or reading local Navon letters is detrimental. Two golf-putting experiments are presented that compare the effects that these three tasks have on experienced and novice golfers. Experiment 1 found a Navon effect on golf performance for experienced players. Experiment 2 found, for experienced players only, that performance was impaired following the three tasks described above, when compared with reading or global Navon tasks. It is suggested that the three tasks affect skilled performance by provoking a shift from automatic behavior to a more analytic style. By demonstrating similarities between effects in face recognition and sporting behavior, it is hoped to better understand concepts in both fields.

  13. Sources of Interference in Recognition Testing

    ERIC Educational Resources Information Center

    Annis, Jeffrey; Malmberg, Kenneth J.; Criss, Amy H.; Shiffrin, Richard M.

    2013-01-01

    Recognition memory accuracy is harmed by prior testing (a.k.a., output interference [OI]; Tulving & Arbuckle, 1966). In several experiments, we interpolated various tasks between recognition test trials. The stimuli and the tasks were more similar (lexical decision [LD] of words and nonwords) or less similar (gender identification of male and…

  14. Relational and item-specific influences on generate-recognize processes in recall.

    PubMed

    Guynn, Melissa J; McDaniel, Mark A; Strosser, Garrett L; Ramirez, Juan M; Castleberry, Erica H; Arnett, Kristen H

    2014-02-01

    The generate-recognize model and the relational-item-specific distinction are two approaches to explaining recall. In this study, we consider the two approaches in concert. Following Jacoby and Hollingshead (Journal of Memory and Language 29:433-454, 1990), we implemented a production task and a recognition task following production (1) to evaluate whether generation and recognition components were evident in cued recall and (2) to gauge the effects of relational and item-specific processing on these components. An encoding task designed to augment item-specific processing (anagram-transposition) produced a benefit on the recognition component (Experiments 1-3) but no significant benefit on the generation component (Experiments 1-3), in the context of a significant benefit to cued recall. By contrast, an encoding task designed to augment relational processing (category-sorting) did produce a benefit on the generation component (Experiment 3). These results converge on the idea that in recall, item-specific processing impacts a recognition component, whereas relational processing impacts a generation component.

  15. The medial dorsal thalamic nucleus and the medial prefrontal cortex of the rat function together to support associative recognition and recency but not item recognition.

    PubMed

    Cross, Laura; Brown, Malcolm W; Aggleton, John P; Warburton, E Clea

    2012-12-21

    In humans, recognition memory deficits, a typical feature of diencephalic amnesia, have been tentatively linked to mediodorsal thalamic nucleus (MD) damage. Animal studies have occasionally investigated the role of the MD in single-item recognition, but have not systematically analyzed its involvement in other recognition memory processes. In Experiment 1, rats with bilateral excitotoxic lesions in the MD or the medial prefrontal cortex (mPFC) were tested in tasks that assessed single-item recognition (novel object preference), associative recognition memory (object-in-place), and recency discrimination (recency memory task). Experiment 2 examined the functional importance of the interactions between the MD and mPFC using disconnection techniques. Unilateral excitotoxic lesions were placed in both the MD and the mPFC in either the same (MD + mPFC Ipsi group) or opposite hemispheres (MD + mPFC Contra group). Bilateral lesions in the MD or mPFC impaired object-in-place and recency memory tasks, but had no effect on novel object preference. In Experiment 2, the MD + mPFC Contra group was significantly impaired in the object-in-place and recency memory tasks compared with the MD + mPFC Ipsi group, but novel object preference was intact. Thus, connections between the MD and mPFC are critical for recognition memory when the discriminations involve associative or recency information. However, the rodent MD is not necessary for single-item recognition memory.

  16. Deletion of the GluA1 AMPA receptor subunit impairs recency-dependent object recognition memory

    PubMed Central

    Sanderson, David J.; Hindley, Emma; Smeaton, Emily; Denny, Nick; Taylor, Amy; Barkus, Chris; Sprengel, Rolf; Seeburg, Peter H.; Bannerman, David M.

    2011-01-01

    Deletion of the GluA1 AMPA receptor subunit impairs short-term spatial recognition memory. It has been suggested that short-term recognition depends upon memory caused by the recent presentation of a stimulus that is independent of contextual–retrieval processes. The aim of the present set of experiments was to test whether the role of GluA1 extends to nonspatial recognition memory. Wild-type and GluA1 knockout mice were tested on the standard object recognition task and a context-independent recognition task that required recency-dependent memory. In a first set of experiments it was found that GluA1 deletion failed to impair performance on either of the object recognition or recency-dependent tasks. However, GluA1 knockout mice displayed increased levels of exploration of the objects in both the sample and test phases compared to controls. In contrast, when the time that GluA1 knockout mice spent exploring the objects was yoked to control mice during the sample phase, it was found that GluA1 deletion now impaired performance on both the object recognition and the recency-dependent tasks. GluA1 deletion failed to impair performance on a context-dependent recognition task regardless of whether object exposure in knockout mice was yoked to controls or not. These results demonstrate that GluA1 is necessary for nonspatial as well as spatial recognition memory and plays an important role in recency-dependent memory processes. PMID:21378100

  17. Repetition and brain potentials when recognizing natural scenes: task and emotion differences

    PubMed Central

    Bradley, Margaret M.; Codispoti, Maurizio; Karlsson, Marie; Lang, Peter J.

    2013-01-01

    Repetition has long been known to facilitate memory performance, but its effects on event-related potentials (ERPs), measured as an index of recognition memory, are less well characterized. In Experiment 1, effects of both massed and distributed repetition on old–new ERPs were assessed during an immediate recognition test that followed incidental encoding of natural scenes that also varied in emotionality. Distributed repetition at encoding enhanced both memory performance and the amplitude of an old–new ERP difference over centro-parietal sensors. To assess whether these repetition effects reflect encoding or retrieval differences, the recognition task was replaced with passive viewing of old and new pictures in Experiment 2. In the absence of an explicit recognition task, ERPs were completely unaffected by repetition at encoding, and only emotional pictures prompted a modestly enhanced old–new difference. Taken together, the data suggest that repetition facilitates retrieval processes and that, in the absence of an explicit recognition task, differences in old–new ERPs are only apparent for affective cues. PMID:22842817

  18. Famous face recognition, face matching, and extraversion.

    PubMed

    Lander, Karen; Poyarekar, Siddhi

    2015-01-01

    It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.

  19. Memory Asymmetry of Forward and Backward Associations in Recognition Tasks

    ERIC Educational Resources Information Center

    Yang, Jiongjiong; Zhao, Peng; Zhu, Zijian; Mecklinger, Axel; Fang, Zhiyong; Li, Han

    2013-01-01

    There is an intensive debate on whether memory for serial order is symmetric. The objective of this study was to explore whether associative asymmetry is modulated by memory task (recognition vs. cued recall). Participants were asked to memorize word triples (Experiments 1-2) or pairs (Experiments 3-6) during the study phase. They then recalled…

  20. False recall and recognition of brand names increases over time.

    PubMed

    Sherman, Susan M

    2013-01-01

    Using the Deese-Roediger-McDermott (DRM) paradigm, participants are presented with lists of associated words (e.g., bed, awake, night). Subsequently, they reliably have false memories for related but nonpresented words (e.g., SLEEP). Previous research has found that false memories can be created for brand names (e.g., Morrisons, Sainsbury's, Waitrose, and TESCO). The present study investigates the effect of a week's delay on false memories for brand names. Participants were presented with lists of brand names followed by a distractor task. In two between-subjects experiments, participants completed a free recall task or a recognition task either immediately or a week later. In two within-subjects experiments, participants completed a free recall task or a recognition task both immediately and a week later. Correct recall for presented list items decreased over time, whereas false recall for nonpresented lure items increased. For recognition, raw scores revealed an increase in false memory across time reflected in an increase in Remember responses. Analysis of Pr scores revealed that false memory for lures stayed constant over a week, but with an increase in Remember responses in the between-subjects experiment and a trend in the same direction in the within-subjects experiment. Implications for theories of false memory are discussed.
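
    For context on the Pr measure mentioned in this abstract, a minimal worked sketch follows: Pr is the two-high-threshold discrimination index, computed as hit rate minus false-alarm rate (Snodgrass & Corwin, 1988). The rates in the example are hypothetical, chosen only to illustrate how Pr can stay constant even while raw "old" (and Remember) responses increase over the delay.

    ```python
    # Minimal sketch of the Pr discrimination index: Pr = hit rate - false-alarm rate.
    # The rates below are invented for illustration; they are not data from the study.
    def pr_score(hit_rate: float, false_alarm_rate: float) -> float:
        return hit_rate - false_alarm_rate

    # Hypothetical immediate vs. one-week-delayed recognition tests:
    print(pr_score(0.55, 0.20))  # immediate -> 0.35
    print(pr_score(0.70, 0.35))  # delayed   -> 0.35 (same Pr despite more "old" responses)
    ```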

  21. The Effects of a Distracting N-Back Task on Recognition Memory Are Reduced by Negative Emotional Intensity

    PubMed Central

    Buratto, Luciano G.; Pottage, Claire L.; Brown, Charity; Morrison, Catriona M.; Schaefer, Alexandre

    2014-01-01

    Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated. PMID:25330251

  22. The effects of a distracting N-back task on recognition memory are reduced by negative emotional intensity.

    PubMed

    Buratto, Luciano G; Pottage, Claire L; Brown, Charity; Morrison, Catriona M; Schaefer, Alexandre

    2014-01-01

    Memory performance is usually impaired when participants have to encode information while performing a concurrent task. Recent studies using recall tasks have found that emotional items are more resistant to such cognitive depletion effects than non-emotional items. However, when recognition tasks are used, the same effect is more elusive as recent recognition studies have obtained contradictory results. In two experiments, we provide evidence that negative emotional content can reliably reduce the effects of cognitive depletion on recognition memory only if stimuli with high levels of emotional intensity are used. In particular, we found that recognition performance for realistic pictures was impaired by a secondary 3-back working memory task during encoding if stimuli were emotionally neutral or had moderate levels of negative emotionality. In contrast, when negative pictures with high levels of emotional intensity were used, the detrimental effects of the secondary task were significantly attenuated.

  23. Effects of level of processing but not of task enactment on recognition memory in a case of developmental amnesia.

    PubMed

    Gardiner, John M; Brandt, Karen R; Vargha-Khadem, Faraneh; Baddeley, Alan; Mishkin, Mortimer

    2006-09-01

    We report the performance in four recognition memory experiments of Jon, a young adult with early-onset developmental amnesia whose episodic memory is gravely impaired in tests of recall, but seems relatively preserved in tests of recognition, and who has developed normal levels of performance in tests of intelligence and general knowledge. Jon's recognition performance was enhanced by deeper levels of processing in comparing a more meaningful study task with a less meaningful one, but not by task enactment in comparing performance of an action with reading an action phrase. Both of these variables normally enhance episodic remembering, which Jon claimed to experience. But Jon was unable to support that claim by recollecting what it was that he remembered. Taken altogether, the findings strongly imply that Jon's recognition performance entailed little genuine episodic remembering and that the levels-of-processing effects in Jon reflected semantic, not episodic, memory.

  24. Transfer-appropriate processing in recognition memory: perceptual and conceptual effects on recognition memory depend on task demands.

    PubMed

    Parks, Colleen M

    2013-07-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which suggest that the extent to which perceptual fluency matters on a recognition test depends in large part on the task demands. A test that recruits perceptual processing for discrimination should show greater perceptual effects and smaller conceptual effects than standard recognition, similar to the pattern of effects found in perceptual implicit memory tasks. This idea was tested in the current experiment by crossing a levels of processing manipulation with a modality manipulation on a series of recognition tests that ranged from conceptual (standard recognition) to very perceptually demanding (a speeded recognition test with degraded stimuli). Results showed that the levels of processing effect decreased and the effect of modality increased when tests were made perceptually demanding. These results support the idea that surface-level features influence performance on recognition tests when they are made salient by the task demands. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  5. "We all look the same to me": positive emotions eliminate the own-race in face recognition.

    PubMed

    Johnson, Kareem J; Fredrickson, Barbara L

    2005-11-01

    Extrapolating from the broaden-and-build theory, we hypothesized that positive emotion may reduce the own-race bias in facial recognition. In Experiments 1 and 2, Caucasian participants (N = 89) viewed Black and White faces for a recognition task. They viewed videos eliciting joy, fear, or neutrality before the learning (Experiment 1) or testing (Experiment 2) stages of the task. Results reliably supported the hypothesis. Relative to fear or a neutral state, joy experienced before either stage improved recognition of Black faces and significantly reduced the own-race bias. Discussion centers on possible mechanisms for this reduction of the own-race bias, including improvements in holistic processing and promotion of a common in-group identity due to positive emotions.

  26. Cultural differences in self-recognition: the early development of autonomous and related selves?

    PubMed

    Ross, Josephine; Yilmaz, Mandy; Dale, Rachel; Cassidy, Rose; Yildirim, Iraz; Suzanne Zeedyk, M

    2017-05-01

    Fifteen- to 18-month-old infants from three nationalities were observed interacting with their mothers and during two self-recognition tasks. Scottish interactions were characterized by distal contact, Zambian interactions by proximal contact, and Turkish interactions by a mixture of contact strategies. These culturally distinct experiences may scaffold different perspectives on self. In support, Scottish infants performed best in a task requiring recognition of the self in an individualistic context (mirror self-recognition), whereas Zambian infants performed best in a task requiring recognition of the self in a less individualistic context (body-as-obstacle task). Turkish infants performed similarly to Zambian infants on the body-as-obstacle task, but outperformed Zambians on the mirror self-recognition task. Verbal contact (a distal strategy) was positively related to mirror self-recognition and negatively related to passing the body-as-obstacle task. Directive action and speech (proximal strategies) were negatively related to mirror self-recognition. Self-awareness performance was best predicted by cultural context; autonomous settings predicted success in mirror self-recognition, and related settings predicted success in the body-as-obstacle task. These novel data substantiate the idea that cultural factors may play a role in the early expression of self-awareness. More broadly, the results highlight the importance of moving beyond the mark test, and designing culturally sensitive tests of self-awareness. © 2016 John Wiley & Sons Ltd.

  27. Does the generation effect occur for pictures?

    PubMed

    Kinjo, H; Snodgrass, J G

    2000-01-01

    The generation effect is the finding that self-generated stimuli are recalled and recognized better than read stimuli. The effect has been demonstrated primarily with words. This article examines the effect for pictures in two experiments: Subjects named complete pictures (name condition) and fragmented pictures (generation condition). In Experiment 1, memory was tested in 3 explicit tasks: free recall, yes/no recognition, and a source-monitoring task on whether each picture was complete or fragmented (the complete/incomplete task). The generation effect was found for all 3 tasks. However, in the recognition and source-monitoring tasks, the generation effect was observed only in the generation condition. We hypothesized that absence of the effect in the name condition was due to the sensory or process match effect between study and test pictures and the superior identification of pictures in the name condition. Therefore, stimuli were changed from pictures to their names in Experiment 2. Memory was tested in the recognition task, complete/incomplete task, and second source-monitoring task (success/failure) on whether each picture had been identified successfully. The generation effect was observed for all 3 tasks. These results suggest that memory of structural and semantic characteristics and of success in identification of generated pictures may contribute to the generation effect.

  28. Face recognition and description abilities in people with mild intellectual disabilities.

    PubMed

    Gawrylowicz, Julie; Gabbert, Fiona; Carson, Derek; Lindsay, William R; Hancock, Peter J B

    2013-09-01

    People with intellectual disabilities (ID) are as likely as the general population to find themselves in the situation of having to identify and/or describe a perpetrator's face to the police. However, limited verbal and memory abilities in people with ID might prevent them from engaging in standard police procedures. Two experiments examined face recognition and description abilities in people with mild intellectual disabilities (mID) and compared their performance with that of people without ID. Experiment 1 used three old/new face recognition tasks. Experiment 2 consisted of two face description tasks, during which participants had to verbally describe faces from memory and with the target in view. Participants with mID performed significantly worse on both recognition and recall tasks than control participants. However, their group performance was better than chance and they showed variability in performance depending on the measures introduced. The practical implications of these findings in forensic settings are discussed. © 2013 John Wiley & Sons Ltd.

  29. Explicit and spontaneous retrieval of emotional scenes: electrophysiological correlates.

    PubMed

    Weymar, Mathias; Bradley, Margaret M; El-Hinnawi, Nasryn; Lang, Peter J

    2013-10-01

    When event-related potentials (ERP) are measured during a recognition task, items that have previously been presented typically elicit a larger late (400-800 ms) positive potential than new items. Recent data, however, suggest that emotional, but not neutral, pictures show ERP evidence of spontaneous retrieval when presented in a free-viewing task (Ferrari, Bradley, Codispoti, Karlsson, & Lang, 2012). In two experiments, we further investigated the brain dynamics of implicit and explicit retrieval. In Experiment 1, brain potentials were measured during a semantic categorization task, which did not explicitly probe episodic memory, but which, like a recognition task, required an active decision and a button press, and were compared to those elicited during recognition and free viewing. Explicit recognition prompted a late enhanced positivity for previously presented, compared with new, pictures regardless of hedonic content. In contrast, only emotional pictures showed an old-new difference when the task did not explicitly probe episodic memory, either when making an active categorization decision regarding picture content, or when simply viewing pictures. In Experiment 2, however, neutral pictures did prompt a significant old-new ERP difference during subsequent free viewing when emotionally arousing pictures were not included in the encoding set. These data suggest that spontaneous retrieval is heightened for salient cues, perhaps reflecting heightened attention and elaborative processing at encoding.

  30. Explicit and spontaneous retrieval of emotional scenes: Electrophysiological correlates

    PubMed Central

    Weymar, Mathias; Bradley, Margaret M.; El-Hinnawi, Nasryn; Lang, Peter J.

    2014-01-01

    When event-related potentials (ERPs) are measured during a recognition task, items that have previously been presented typically elicit a larger late (400–800 ms) positive potential than new items. Recent data, however, suggest that emotional, but not neutral, pictures show ERP evidence of spontaneous retrieval when presented in a free-viewing task (Ferrari, Bradley, Codispoti & Lang, 2012). In two experiments, we further investigated the brain dynamics of implicit and explicit retrieval. In Experiment 1, brain potentials were measured during a semantic categorization task, which did not explicitly probe episodic memory, but which, like a recognition task, required an active decision and a button press, and were compared to those elicited during recognition and free viewing. Explicit recognition prompted a late enhanced positivity for previously presented, compared to new, pictures regardless of hedonic content. In contrast, only emotional pictures showed an old-new difference when the task did not explicitly probe episodic memory, either when making an active categorization decision regarding picture content, or when simply viewing pictures. In Experiment 2, however, neutral pictures did prompt a significant old-new ERP difference during subsequent free viewing when emotionally arousing pictures were not included in the encoding set. These data suggest that spontaneous retrieval is heightened for salient cues, perhaps reflecting heightened attention and elaborative processing at encoding. PMID:23795588

  31. Emotion recognition and oxytocin in patients with schizophrenia

    PubMed Central

    Averbeck, B. B.; Bobin, T.; Evans, S.; Shergill, S. S.

    2012-01-01

    Background: Studies have suggested that patients with schizophrenia are impaired at recognizing emotions. Recently, it has been shown that the neuropeptide oxytocin can have beneficial effects on social behaviors. Method: To examine emotion recognition deficits in patients and see whether oxytocin could improve these deficits, we carried out two experiments. In the first experiment we recruited 30 patients with schizophrenia and 29 age- and IQ-matched control subjects, and gave them an emotion recognition task. Following this, we carried out a second experiment in which we recruited 21 patients with schizophrenia for a double-blind, placebo-controlled cross-over study of the effects of oxytocin on the same emotion recognition task. Results: In the first experiment we found that patients with schizophrenia had a deficit relative to controls in recognizing emotions. In the second experiment we found that administration of oxytocin improved the ability of patients to recognize emotions. The improvement was consistent and occurred for most emotions, and was present whether patients were identifying morphed or non-morphed faces. Conclusions: These data add to a growing literature showing beneficial effects of oxytocin on social–behavioral tasks, as well as clinical symptoms. PMID:21835090

  32. Depth rotation and mirror-image reflection reduce affective preference as well as recognition memory for pictures of novel objects.

    PubMed

    Lawson, Rebecca

    2004-10-01

    In two experiments, the identification of novel 3-D objects was worse for depth-rotated and mirror-reflected views, compared with the study view in an implicit affective preference memory task, as well as in an explicit recognition memory task. In Experiment 1, recognition was worse and preference was lower when depth-rotated views of an object were paired with an unstudied object relative to trials when the study view of that object was shown. There was a similar trend for mirror-reflected views. In Experiment 2, the study view of an object was both recognized and preferred above chance when it was paired with either depth-rotated or mirror-reflected views of that object. These results suggest that view-sensitive representations of objects mediate performance in implicit, as well as explicit, memory tasks. The findings do not support the claim that separate episodic and structural description representations underlie performance in implicit and explicit memory tasks, respectively.

  33. Déjà vu experiences in healthy subjects are unrelated to laboratory tests of recollection and familiarity for word stimuli.

    PubMed

    O'Connor, Akira R; Moulin, Chris J A

    2013-01-01

    Recent neuropsychological and neuroscientific research suggests that people who experience more déjà vu display characteristic patterns in normal recognition memory. We conducted a large individual differences study (n = 206) to test these predictions using recollection and familiarity parameters recovered from a standard memory task. Participants reported déjà vu frequency and a number of its correlates, and completed a recognition memory task analogous to a Remember-Know procedure. The individual difference measures replicated an established correlation between déjà vu frequency and frequency of travel, and recognition performance showed well-established word frequency and accuracy effects. Contrary to predictions, no relationships were found between déjà vu frequency and recollection or familiarity memory parameters from the recognition test. We suggest that déjà vu in the healthy population reflects a mismatch between errant memory signaling and memory monitoring processes not easily characterized by standard recognition memory task performance.
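
    One common way to recover recollection and familiarity estimates from Remember-Know style responses is the independence Remember-Know (IRK) correction, sketched below with hypothetical response proportions. It is offered only as an illustration of what such parameters are; it is not necessarily the estimation procedure used in this study.

    ```python
    # Sketch of independence Remember-Know (IRK) estimates of recollection and
    # familiarity. A standard textbook correction shown with made-up numbers;
    # an assumption for illustration, not this study's actual analysis.
    def irk_estimates(p_remember_old: float, p_know_old: float):
        recollection = p_remember_old
        # Familiarity is estimated conditional on the item not being recollected.
        familiarity = p_know_old / (1.0 - p_remember_old)
        return recollection, familiarity

    # Hypothetical proportions of "Remember" and "Know" responses to old words.
    print(irk_estimates(p_remember_old=0.40, p_know_old=0.30))  # -> (0.4, 0.5)
    ```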

  34. Déjà vu experiences in healthy subjects are unrelated to laboratory tests of recollection and familiarity for word stimuli

    PubMed Central

    O’Connor, Akira R.; Moulin, Chris J. A.

    2013-01-01

    Recent neuropsychological and neuroscientific research suggests that people who experience more déjà vu display characteristic patterns in normal recognition memory. We conducted a large individual differences study (n = 206) to test these predictions using recollection and familiarity parameters recovered from a standard memory task. Participants reported déjà vu frequency and a number of its correlates, and completed a recognition memory task analogous to a Remember-Know procedure. The individual difference measures replicated an established correlation between déjà vu frequency and frequency of travel, and recognition performance showed well-established word frequency and accuracy effects. Contrary to predictions, no relationships were found between déjà vu frequency and recollection or familiarity memory parameters from the recognition test. We suggest that déjà vu in the healthy population reflects a mismatch between errant memory signaling and memory monitoring processes not easily characterized by standard recognition memory task performance. PMID:24409159

  35. Selective attention affects conceptual object priming and recognition: a study with young and older adults.

    PubMed

    Ballesteros, Soledad; Mayas, Julia

    2014-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old-new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old-new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults.

  36. Selective attention affects conceptual object priming and recognition: a study with young and older adults

    PubMed Central

    Ballesteros, Soledad; Mayas, Julia

    2015-01-01

    In the present study, we investigated the effects of selective attention at encoding on conceptual object priming (Experiment 1) and old–new recognition memory (Experiment 2) tasks in young and older adults. The procedures of both experiments included encoding and memory test phases separated by a short delay. At encoding, the picture outlines of two familiar objects, one in blue and the other in green, were presented to the left and to the right of fixation. In Experiment 1, participants were instructed to attend to the picture outline of a certain color and to classify the object as natural or artificial. After a short delay, participants performed a natural/artificial speeded conceptual classification task with repeated attended, repeated unattended, and new pictures. In Experiment 2, participants at encoding memorized the attended pictures and classified them as natural or artificial. After the encoding phase, they performed an old–new recognition memory task. Consistent with previous findings with perceptual priming tasks, we found that conceptual object priming, like explicit memory, required attention at encoding. Significant priming was obtained in both age groups, but only for those pictures that were attended at encoding. Although older adults were slower than young adults, both groups showed facilitation for attended pictures. In line with previous studies, young adults had better recognition memory than older adults. PMID:25628588

  37. Distinguishing familiarity from fluency for the compound word pair effect in associative recognition.

    PubMed

    Ahmad, Fahad N; Hockley, William E

    2017-09-01

    We examined whether processing fluency contributes to associative recognition of unitized pre-experimental associations. In Experiments 1A and 1B, we minimized perceptual fluency by presenting each word of a pair on a separate screen at both study and test, yet the compound word (CW) effect (i.e., hit and false-alarm rates greater for CW pairs with no difference in discrimination) was not reduced. In Experiments 2A and 2B, conceptual fluency was examined by comparing transparent (e.g., hand bag) and opaque (e.g., rag time) CW pairs in lexical decision and associative recognition tasks. Lexical decision was faster for transparent CWs (Experiment 2A), but in associative recognition the CW effect did not differ by CW pair type (Experiment 2B). In Experiments 3A and 3B, we examined whether priming that increases processing fluency would influence the CW effect. In Experiment 3A, CW and non-compound word pairs were preceded by matched and mismatched primes at test in an associative recognition task. In Experiment 3B, only transparent and opaque CW pairs were presented. Results showed that presenting matched versus mismatched primes at test did not influence the CW effect. The CW effect in yes-no associative recognition is due to reliance on enhanced familiarity of unitized CW pairs.

  38. Multi-Task Convolutional Neural Network for Pose-Invariant Face Recognition

    NASA Astrophysics Data System (ADS)

    Yin, Xi; Liu, Xiaoming

    2018-02-01

    This paper explores multi-task learning (MTL) for face recognition. We answer the questions of how and why MTL can improve face recognition performance. First, we propose a multi-task Convolutional Neural Network (CNN) for face recognition where identity classification is the main task and pose, illumination, and expression estimations are the side tasks. Second, we develop a dynamic-weighting scheme to automatically assign the loss weight to each side task, which is a crucial problem in MTL. Third, we propose a pose-directed multi-task CNN by grouping different poses to learn pose-specific identity features, simultaneously across all poses. Last but not least, we propose an energy-based weight analysis method to explore how CNN-based MTL works. We observe that the side tasks serve as regularizations to disentangle the variations from the learnt identity features. Extensive experiments on the entire Multi-PIE dataset demonstrate the effectiveness of the proposed approach. To the best of our knowledge, this is the first work using all data in Multi-PIE for face recognition. Our approach is also applicable to in-the-wild datasets for pose-invariant face recognition and achieves performance comparable to or better than the state of the art on the LFW, CFP, and IJB-A datasets.
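
    To make the kind of architecture described above concrete, the sketch below shows a small multi-task CNN in PyTorch with identity classification as the main task and pose, illumination, and expression estimation as side tasks. It is a minimal stand-in under stated assumptions: the trunk, the label counts (loosely modeled on Multi-PIE), and the learnable log-variance loss weighting are illustrative choices, not the authors' actual network or dynamic-weighting scheme.

    ```python
    # Minimal multi-task CNN sketch (PyTorch). Identity is the main task; pose,
    # illumination, and expression are side tasks. Loss weights are learned via a
    # simple uncertainty-style scheme, used here only as a stand-in for the
    # paper's dynamic-weighting method.
    import torch
    import torch.nn as nn

    class MultiTaskFaceCNN(nn.Module):
        def __init__(self, n_ids=337, n_poses=13, n_illums=19, n_exprs=6):
            # Label counts are illustrative (roughly Multi-PIE-like), not exact.
            super().__init__()
            self.trunk = nn.Sequential(              # shared feature extractor
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.heads = nn.ModuleDict({             # one classifier per task
                "identity": nn.Linear(64, n_ids),
                "pose": nn.Linear(64, n_poses),
                "illumination": nn.Linear(64, n_illums),
                "expression": nn.Linear(64, n_exprs),
            })
            self.log_vars = nn.ParameterDict({       # learnable per-task weights
                k: nn.Parameter(torch.zeros(())) for k in self.heads
            })

        def forward(self, x):
            feat = self.trunk(x)
            return {k: head(feat) for k, head in self.heads.items()}

        def loss(self, logits, targets):
            ce = nn.CrossEntropyLoss()
            total = 0.0
            for k in self.heads:
                s = self.log_vars[k]                 # log-variance for task k
                total = total + torch.exp(-s) * ce(logits[k], targets[k]) + s
            return total

    # Toy usage with random tensors standing in for face images and labels.
    model = MultiTaskFaceCNN()
    x = torch.randn(8, 3, 64, 64)
    targets = {"identity": torch.randint(0, 337, (8,)),
               "pose": torch.randint(0, 13, (8,)),
               "illumination": torch.randint(0, 19, (8,)),
               "expression": torch.randint(0, 6, (8,))}
    model.loss(model(x), targets).backward()
    ```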

  39. Warmth of familiarity and chill of error: affective consequences of recognition decisions.

    PubMed

    Chetverikov, Andrey

    2014-04-01

    The present research aimed to assess the effect of recognition decision on subsequent affective evaluations of recognised and non-recognised objects. Consistent with the proposed account of post-decisional preferences, results showed that the effect of recognition on preferences depends upon objective familiarity. If stimuli are recognised, liking ratings are positively associated with exposure frequency; if stimuli are not recognised, this link is either absent (Experiment 1) or negative (Experiments 2 and 3). This interaction between familiarity and recognition exists even when recognition accuracy is at chance level and the "mere exposure" effect is absent. Finally, data obtained from repeated measurements of preferences and using manipulations of task order confirm that recognition decisions have a causal influence on preferences. The findings suggest that affective evaluation can provide fine-grained access to the efficacy of cognitive processing even in simple cognitive tasks.

  40. Orthographic neighborhood effects in recognition and recall tasks in a transparent orthography.

    PubMed

    Justi, Francis R R; Jaeger, Antonio

    2017-04-01

    The number of orthographic neighbors of a word influences its probability of being retrieved in recognition and free recall memory tests. Even though this phenomenon is well demonstrated for English words, it has yet to be demonstrated for languages with more predictable grapheme-phoneme mappings than English. To address this issue, 4 experiments were conducted to investigate effects of number of orthographic neighbors (N) and effects of frequency of occurrence of orthographic neighbors (NF) on memory retrieval of Brazilian Portuguese words. One hundred twenty-four Brazilian Portuguese speakers first performed a lexical-decision task (LDT) on words that were factorially manipulated according to N and NF, and intermixed with either nonpronounceable nonwords without orthographic neighbors (Experiments 1A and 2A), or with pronounceable nonwords with a large number of orthographic neighbors (Experiments 1B and 2B). The words were later used as probes on either recognition (Experiments 1A and 1B) or recall tests (Experiments 2A and 2B). Words with 1 orthographic neighbor were consistently better remembered than words with several orthographic neighbors in all recognition and recall tests. Notably, whereas in Experiment 1A false alarm rates were higher for words with several rather than 1 orthographic neighbor, in Experiment 1B false alarm rates were higher for words with 1 rather than several orthographic neighbors. Effects of NF, on the other hand, were not consistent among memory tasks. The effects of N on the recognition and recall tests conducted here are interpreted in light of dual process models of recognition. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  41. Latency of modality-specific reactivation of auditory and visual information during episodic memory retrieval.

    PubMed

    Ueno, Daisuke; Masumoto, Kouhei; Sutani, Kouichi; Iwaki, Sunao

    2015-04-15

    This study used magnetoencephalography (MEG) to examine the latency of modality-specific reactivation in the visual and auditory cortices during a recognition task to determine the effects of reactivation on episodic memory retrieval. Nine right-handed healthy young adults participated in the experiment. The experiment consisted of a word-encoding phase and two recognition phases. Three encoding conditions were included: encoding words alone (word-only) and encoding words presented with either related pictures (visual) or related sounds (auditory). The recognition task was conducted in the MEG scanner 15 min after the completion of the encoding phase. After the recognition test, a source-recognition task was given, in which participants were required to choose whether each recognition word was not presented or was presented with which information during the encoding phase. Word recognition in the auditory condition was higher than that in the word-only condition. Confidence-of-recognition scores (d') and the source-recognition test showed superior performance in both the visual and the auditory conditions compared with the word-only condition. An equivalent current dipoles analysis of MEG data indicated that higher equivalent current dipole amplitudes in the right fusiform gyrus occurred during the visual condition and in the superior temporal auditory cortices during the auditory condition, both 450-550 ms after onset of the recognition stimuli. Results suggest that reactivation of visual and auditory brain regions during recognition binds language with modality-specific information and that reactivation enhances confidence in one's recognition performance.
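
    For readers unfamiliar with the d' measure reported above, the sketch below shows how a recognition d' (sensitivity) score is conventionally computed from hit and false-alarm rates under signal detection theory. The counts are invented for illustration and are not data from this study.

    ```python
    # Worked example of d' = z(hit rate) - z(false-alarm rate), using a
    # log-linear correction to keep rates away from 0 and 1. The counts are
    # hypothetical, not taken from the MEG study.
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        z = NormalDist().inv_cdf
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return z(hit_rate) - z(fa_rate)

    # Hypothetical participant: 40 old and 40 new words at the recognition test.
    print(d_prime(hits=32, misses=8, false_alarms=6, correct_rejections=34))
    ```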

  42. Tolerance for distorted faces: challenges to a configural processing account of familiar face recognition.

    PubMed

    Sandford, Adam; Burton, A Mike

    2014-09-01

    Face recognition is widely held to rely on 'configural processing', an analysis of spatial relations between facial features. We present three experiments in which viewers were shown distorted faces, and asked to resize these to their correct shape. Based on configural theories appealing to metric distances between features, we reason that this should be an easier task for familiar than unfamiliar faces (whose subtle arrangements of features are unknown). In fact, participants were inaccurate at this task, making between 8% and 13% errors across experiments. Importantly, we observed no advantage for familiar faces: in one experiment participants were more accurate with unfamiliar faces, and in two experiments there was no difference. These findings were not due to general task difficulty: participants were able to resize blocks of colour to target shapes (squares) more accurately. We also found an advantage of familiarity for resizing other stimuli (brand logos). If configural processing does underlie face recognition, these results place constraints on the definition of 'configural'. Alternatively, familiar face recognition might rely on more complex criteria, based on tolerance to within-person variation rather than highly specific measurement. Copyright © 2014 Elsevier B.V. All rights reserved.

  3. An information-processing model of three cortical regions: evidence in episodic memory retrieval.

    PubMed

    Sohn, Myeong-Ho; Goode, Adam; Stenger, V Andrew; Jung, Kwan-Jin; Carter, Cameron S; Anderson, John R

    2005-03-01

    ACT-R (Anderson, J.R., et al., 2003. An information-processing model of the BOLD response in symbol manipulation tasks. Psychon. Bull. Rev. 10, 241-261) relates the inferior dorso-lateral prefrontal cortex to a retrieval buffer that holds information retrieved from memory and the posterior parietal cortex to an imaginal buffer that holds problem representations. Because the number of changes in a problem representation is not necessarily correlated with retrieval difficulty, it is possible to dissociate prefrontal-parietal activations. In two fMRI experiments, we examined this dissociation using the fan effect paradigm. Experiment 1 compared a recognition task, in which the representation requirement remains the same regardless of retrieval difficulty, with a recall task, in which both representation and retrieval loads increase with retrieval difficulty. In the recognition task, prefrontal activation revealed a fan effect, whereas parietal activation did not. In the recall task, both regions revealed fan effects. In Experiment 2, we compared visually presented stimuli and aurally presented stimuli using the recognition task. Although only the prefrontal region revealed the fan effect, the activation patterns in the prefrontal and parietal regions did not differ by stimulus presentation modality. In general, these results provide support for the prefrontal-parietal dissociation in terms of retrieval and representation and the modality-independent nature of the information processed by these regions. Using ACT-R, we also provide computational models that explain patterns of fMRI responses in these two areas during recognition and recall.
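    The fan effect examined above is usually modeled in ACT-R through associative activation that decreases with the number of facts (the fan) linked to a cue, and retrieval latency that grows as activation falls. The sketch below uses the standard textbook activation and latency equations with illustrative parameter values; it is not the specific model fitted in the study.

      import math

      def fan_activation(fan, base_level=0.0, W=1.0, S=2.0):
          # ACT-R activation A = B + W * S_ji, with the associative strength
          # S_ji approximated as S - ln(fan of the cue).
          return base_level + W * (S - math.log(fan))

      def retrieval_time(activation, F=0.35):
          # Predicted retrieval latency: T = F * exp(-A).
          return F * math.exp(-activation)

      for fan in (1, 2, 3):
          A = fan_activation(fan)
          print(f"fan {fan}: activation {A:.2f}, retrieval time {retrieval_time(A):.3f} s")

    As the fan grows, activation drops and the predicted retrieval time rises, which is the behavioral signature the two experiments exploit.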

  4. View-invariant object recognition ability develops after discrimination, not mere exposure, at several viewing angles.

    PubMed

    Yamashita, Wakayo; Wang, Gang; Tanaka, Keiji

    2010-01-01

    One usually fails to recognize an unfamiliar object across changes in viewing angle when it has to be discriminated from similar distractor objects. Previous work has demonstrated that after long-term experience in discriminating among a set of objects seen from the same viewing angle, immediate recognition of the objects across 30-60 degrees changes in viewing angle becomes possible. The capability for view-invariant object recognition should develop during the within-viewing-angle discrimination, which includes two kinds of experience: seeing individual views and discriminating among the objects. The aim of the present study was to determine the relative contribution of each factor to the development of view-invariant object recognition capability. Monkeys were first extensively trained in a task that required view-invariant object recognition (Object task) with several sets of objects. The animals were then exposed to a new set of objects over 26 days in one of two preparatory tasks: one in which each object view was seen individually, and a second that required discrimination among the objects at each of four viewing angles. After the preparatory period, we measured the monkeys' ability to recognize the objects across changes in viewing angle, by introducing the object set to the Object task. Results indicated significant view-invariant recognition after the second but not first preparatory task. These results suggest that discrimination of objects from distractors at each of several viewing angles is required for the development of view-invariant recognition of the objects when the distractors are similar to the objects.

  5. On the Relationship between Memory and Perception: Sequential Dependencies in Recognition Memory Testing

    ERIC Educational Resources Information Center

    Malmberg, Kenneth J.; Annis, Jeffrey

    2012-01-01

    Many models of recognition are derived from models originally applied to perception tasks, which assume that decisions from trial to trial are independent. While the independence assumption is violated for many perception tasks, we present the results of several experiments intended to relate memory and perception by exploring sequential…

  6. The Nature of Phoneme Representation in Spoken Word Recognition

    ERIC Educational Resources Information Center

    Gaskell, M. Gareth; Quinlan, Philip T.; Tamminen, Jakke; Cleland, Alexandra A.

    2008-01-01

    Four experiments used the psychological refractory period logic to examine whether integration of multiple sources of phonemic information has a decisional locus. All experiments made use of a dual-task paradigm in which participants made forced-choice color categorization (Task 1) and phoneme categorization (Task 2) decisions at varying stimulus…

  7. The effect of changing the secondary task in dual-task paradigms for measuring listening effort.

    PubMed

    Picou, Erin M; Ricketts, Todd A

    2014-01-01

    The purpose of this study was to evaluate the effect of changing the secondary task in dual-task paradigms that measure listening effort. Specifically, the effects of increasing the secondary task complexity or the depth of processing on a paradigm's sensitivity to changes in listening effort were quantified in a series of two experiments. Specific factors investigated within each experiment were background noise and visual cues. Participants in Experiment 1 were adults with normal hearing (mean age 23 years) and participants in Experiment 2 were adults with mild sloping to moderately severe sensorineural hearing loss (mean age 60.1 years). In both experiments, participants were tested using three dual-task paradigms. These paradigms had identical primary tasks, which were always monosyllable word recognition. The secondary tasks were all physical reaction time measures. The stimulus for the secondary task varied by paradigm and was (1) a simple visual probe, (2) a complex visual probe, or (3) the category of the word presented. In this way, the secondary tasks mainly varied from the simple paradigm by either complexity or depth of speech processing. Using all three paradigms, participants were tested in four conditions: (1) auditory-only stimuli in quiet, (2) auditory-only stimuli in noise, (3) auditory-visual stimuli in quiet, and (4) auditory-visual stimuli in noise. During auditory-visual conditions, the talker's face was visible. Signal-to-noise ratios used during conditions with background noise were set individually so word recognition performance was matched in auditory-only and auditory-visual conditions. In noise, word recognition performance was approximately 80% and 65% for Experiments 1 and 2, respectively. For both experiments, word recognition performance was stable across the three paradigms, confirming that none of the secondary tasks interfered with the primary task. In Experiment 1 (listeners with normal hearing), analysis of median reaction times revealed a significant main effect of background noise on listening effort only with the paradigm that required deep processing. Visual cues did not change listening effort as measured with any of the three dual-task paradigms. In Experiment 2 (listeners with hearing loss), analysis of median reaction times revealed expected significant effects of background noise using all three paradigms, but no significant effects of visual cues. None of the dual-task paradigms were sensitive to the effects of visual cues. Furthermore, changing the complexity of the secondary task did not change dual-task paradigm sensitivity to the effects of background noise on listening effort for either group of listeners. However, the paradigm whose secondary task involved deeper processing was more sensitive to the effects of background noise for both groups of listeners. While this paradigm differed from the others in several respects, depth of processing may be partially responsible for the increased sensitivity. Therefore, this paradigm may be a valuable tool for evaluating other factors that affect listening effort.
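    In dual-task paradigms of this kind, listening effort is quantified as the slowing of the secondary-task response when the listening condition becomes harder. The sketch below shows that computation on hypothetical median reaction times; the function name and the values are illustrative, not data from the study.

      import statistics

      def listening_effort_cost(baseline_rts_ms, test_rts_ms):
          # Listening-effort index: increase in median secondary-task reaction
          # time relative to a baseline condition (larger = more effort).
          return statistics.median(test_rts_ms) - statistics.median(baseline_rts_ms)

      # Hypothetical probe-task reaction times (ms) in quiet vs. in noise.
      quiet = [412, 398, 405, 431, 420]
      noise = [468, 455, 490, 476, 462]
      print(listening_effort_cost(quiet, noise))  # positive value = more effort in noise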

  8. High confidence in falsely recognizing prototypical faces.

    PubMed

    Sampaio, Cristina; Reinke, Victoria; Mathews, Jeffrey; Swart, Alexandra; Wallinger, Stephen

    2018-06-01

    We applied a metacognitive approach to investigate confidence in recognition of prototypical faces. Participants were presented with sets of faces constructed digitally as deviations from prototype/base faces. Participants were then tested with a simple recognition task (Experiment 1) or a multiple-choice task (Experiment 2) for old and new items plus new prototypes, and they showed a high rate of confident false alarms to the prototypes. The confidence-accuracy relationship in this face recognition paradigm was positive for standard items but negative for the prototypes; thus, it was contingent on the nature of the items used. The data have implications for lineups that employ match-to-suspect strategies.
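    Confidence-accuracy relationships of the kind reported above are often summarized with a Goodman-Kruskal gamma correlation computed over trial pairs. The sketch below illustrates that statistic under this assumption; the trial data are made up and not taken from the study.

      def goodman_kruskal_gamma(confidence, accuracy):
          # Gamma = (concordant - discordant) / (concordant + discordant),
          # counted over all pairs of trials; tied pairs are ignored.
          concordant = discordant = 0
          for i in range(len(confidence)):
              for j in range(i + 1, len(confidence)):
                  product = (confidence[i] - confidence[j]) * (accuracy[i] - accuracy[j])
                  if product > 0:
                      concordant += 1
                  elif product < 0:
                      discordant += 1
          if concordant + discordant == 0:
              return float("nan")
          return (concordant - discordant) / (concordant + discordant)

      # Hypothetical trials: confidence ratings (1-5) and accuracy (1 = correct).
      conf = [5, 4, 2, 5, 1, 3]
      acc = [1, 1, 0, 1, 0, 0]
      print(goodman_kruskal_gamma(conf, acc))  # 1.0 for this made-up example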

  9. Familiarity and Recollection in Heuristic Decision Making

    PubMed Central

    Schwikert, Shane R.; Curran, Tim

    2014-01-01

    Heuristics involve the ability to utilize memory to make quick judgments by exploiting fundamental cognitive abilities. In the current study we investigated the memory processes that contribute to the recognition heuristic and the fluency heuristic, which are both presumed to capitalize on the by-products of memory to make quick decisions. In Experiment 1, we used a city-size comparison task while recording event-related potentials (ERPs) to investigate the potential contributions of familiarity and recollection to the two heuristics. ERPs were markedly different for recognition heuristic-based decisions and fluency heuristic-based decisions, suggesting a role for familiarity in the recognition heuristic and recollection in the fluency heuristic. In Experiment 2, we coupled the same city-size comparison task with measures of subjective pre-experimental memory for each stimulus in the task. Although previous literature suggests the fluency heuristic relies on recognition speed alone, our results suggest differential contributions of recognition speed and recollected knowledge to these decisions, whereas the recognition heuristic relies on familiarity. Based on these results, we created a new theoretical framework that explains decisions attributed to both heuristics based on the underlying memory associated with the choice options. PMID:25347534

  10. Familiarity and recollection in heuristic decision making.

    PubMed

    Schwikert, Shane R; Curran, Tim

    2014-12-01

    Heuristics involve the ability to utilize memory to make quick judgments by exploiting fundamental cognitive abilities. In the current study we investigated the memory processes that contribute to the recognition heuristic and the fluency heuristic, which are both presumed to capitalize on the byproducts of memory to make quick decisions. In Experiment 1, we used a city-size comparison task while recording event-related potentials (ERPs) to investigate the potential contributions of familiarity and recollection to the 2 heuristics. ERPs were markedly different for recognition heuristic-based decisions and fluency heuristic-based decisions, suggesting a role for familiarity in the recognition heuristic and recollection in the fluency heuristic. In Experiment 2, we coupled the same city-size comparison task with measures of subjective preexperimental memory for each stimulus in the task. Although previous literature suggests the fluency heuristic relies on recognition speed alone, our results suggest differential contributions of recognition speed and recollected knowledge to these decisions, whereas the recognition heuristic relies on familiarity. Based on these results, we created a new theoretical framework that explains decisions attributed to both heuristics based on the underlying memory associated with the choice options. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  11. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings

    PubMed Central

    Pisoni, David B.

    2015-01-01

    Purpose: Although auditory training has traditionally been studied in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. Method: Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). Results: Subjects receiving interactive training showed significant learning on the sentence-recognition-in-quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. Conclusions: Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task. PMID:25674884

  12. Measuring listening effort: driving simulator vs. simple dual-task paradigm

    PubMed Central

    Wu, Yu-Hsiang; Aksan, Nazan; Rizzo, Matthew; Stangl, Elizabeth; Zhang, Xuyang; Bentler, Ruth

    2014-01-01

    Objectives: The dual-task paradigm has been widely used to measure listening effort. The primary objectives of the study were to (1) investigate the effect of hearing aid amplification and a hearing aid directional technology on listening effort measured by a complicated, more real-world dual-task paradigm, and (2) compare the results obtained with this paradigm to a simpler laboratory-style dual-task paradigm. Design: The listening effort of adults with hearing impairment was measured using two dual-task paradigms, wherein participants performed a speech recognition task simultaneously with either a driving task in a simulator or a visual reaction-time task in a sound-treated booth. The speech materials and road noises for the speech recognition task were recorded in a van traveling on the highway in three hearing aid conditions: unaided, aided with omnidirectional processing (OMNI), and aided with directional processing (DIR). The change in the driving task or the visual reaction-time task performance across the conditions quantified the change in listening effort. Results: Compared to the driving-only condition, driving performance declined significantly with the addition of the speech recognition task. Although the speech recognition score was higher in the OMNI and DIR conditions than in the unaided condition, driving performance was similar across these three conditions, suggesting that listening effort was not affected by amplification and directional processing. Results from the simple dual-task paradigm showed a similar trend: hearing aid technologies improved speech recognition performance, but did not affect performance in the visual reaction-time task (i.e., reduce listening effort). The correlation between listening effort measured using the driving paradigm and the visual reaction-time task paradigm was significant. The finding that better speech recognition performance did not result in reduced listening effort for our older (56 to 85 years old) participants was not consistent with literature that evaluated younger (approximately 20 years old) adults with normal hearing. Because of this, a follow-up study was conducted. In the follow-up study, the visual reaction-time dual-task experiment using the same speech materials and road noises was repeated on younger adults with normal hearing. Contrary to findings with older participants, the results indicated that the directional technology significantly improved performance in both speech recognition and visual reaction-time tasks. Conclusions: Adding a speech listening task to driving undermined driving performance. Hearing aid technologies significantly improved speech recognition while driving, but did not significantly reduce listening effort. Listening effort measured by dual-task experiments using a simulated real-world driving task and a conventional laboratory-style task was generally consistent. For a given listening environment, the benefit of hearing aid technologies on listening effort measured from younger adults with normal hearing may not be fully translated to older listeners with hearing impairment. PMID:25083599

  13. Effects of intelligibility on working memory demand for speech perception.

    PubMed

    Francis, Alexander L; Nusbaum, Howard C

    2009-08-01

    Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.

  14. Gender differences in recognition of toy faces suggest a contribution of experience.

    PubMed

    Ryan, Kaitlin F; Gauthier, Isabel

    2016-12-01

    When there is a gender effect, women perform better than men in face recognition tasks. Prior work has not documented a male advantage on a face recognition task, suggesting that women may outperform men at face recognition generally either due to evolutionary reasons or the influence of social roles. Here, we question the idea that women excel at all face recognition and provide a proof of concept based on a face category for which men outperform women. We developed a test of face learning to measure individual differences with face categories for which men and women may differ in experience, using the faces of Barbie dolls and of Transformers. The results show a crossover interaction between subject gender and category, where men outperform women with Transformers' faces. We demonstrate that men can outperform women with some categories of faces, suggesting that explanations for a general face recognition advantage for women are in fact not needed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  15. Fast and Famous: Looking for the Fastest Speed at Which a Face Can be Recognized

    PubMed Central

    Barragan-Jason, Gladys; Besson, Gabriel; Ceccaldi, Mathieu; Barbeau, Emmanuel J.

    2012-01-01

    Face recognition is supposed to be fast. However, the actual speed at which faces can be recognized remains unknown. To address this issue, we report two experiments run with speed constraints. In both experiments, famous faces had to be recognized among unknown ones using a large set of stimuli to prevent pre-activation of features which would speed up recognition. In the first experiment (31 participants), recognition of famous faces was investigated using a rapid go/no-go task. In the second experiment, 101 participants performed a highly time constrained recognition task using the Speed and Accuracy Boosting procedure. Results indicate that the fastest speed at which a face can be recognized is around 360–390 ms. Such latencies are about 100 ms longer than the latencies recorded in similar tasks in which subjects have to detect faces among other stimuli. We discuss which model of activation of the visual ventral stream could account for such latencies. These latencies are not consistent with a purely feed-forward pass of activity throughout the visual ventral stream. An alternative is that face recognition relies on the core network underlying face processing identified in fMRI studies (OFA, FFA, and pSTS) and reentrant loops to refine face representation. However, the model of activation favored is that of an activation of the whole visual ventral stream up to anterior areas, such as the perirhinal cortex, combined with parallel and feed-back processes. Further studies are needed to assess which of these three models of activation can best account for face recognition. PMID:23460051

  16. Estrous cycle, pregnancy, and parity enhance performance of rats in object recognition or object placement tasks

    PubMed Central

    Paris, Jason J; Frye, Cheryl A

    2008-01-01

    Ovarian hormone elevations are associated with enhanced learning/memory. During behavioral estrus or pregnancy, progestins, such as progesterone (P4) and its metabolite 5α-pregnan-3α-ol-20-one (3α,5α-THP), are elevated due, in part, to corpora luteal and placental secretion. During 'pseudopregnancy', the induction of corpora luteal functioning results in a hormonal milieu analogous to pregnancy, which ceases after about 12 days, due to the lack of placental formation. Multiparity is also associated with enhanced learning/memory, perhaps due to prior steroid exposure during pregnancy. Given evidence that progestins and/or parity may influence cognition, we investigated how natural alterations in the progestin milieu influence cognitive performance. In Experiment 1, virgin rats (nulliparous) or rats with two prior pregnancies (multiparous) were assessed on the object placement and recognition tasks, when in high-estrogen/P4 (behavioral estrus) or low-estrogen/P4 (diestrus) phases of the estrous cycle. In Experiment 2, primiparous or multiparous rats were tested in the object placement and recognition tasks when not pregnant, pseudopregnant, or pregnant (between gestational days (GDs) 6 and 12). In Experiment 3, pregnant primiparous or multiparous rats were assessed daily in the object placement or recognition tasks. Females in natural states associated with higher endogenous progestins (behavioral estrus, pregnancy, multiparity) outperformed rats in low progestin states (diestrus, non-pregnancy, nulliparity) on the object placement and recognition tasks. In earlier pregnancy, multiparous rats, compared with primiparous rats, had lower corticosterone but higher estrogen levels, concomitant with better object placement performance. From GD 13 until post partum, primiparous rats had higher 3α,5α-THP levels and improved object placement performance compared with multiparous rats. PMID:18390689

  17. Hemispheric lateralization of linguistic prosody recognition in comparison to speech and speaker recognition.

    PubMed

    Kreitewolf, Jens; Friederici, Angela D; von Kriegstein, Katharina

    2014-11-15

    Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Emotion-attention interactions in recognition memory for distractor faces.

    PubMed

    Srinivasan, Narayanan; Gupta, Rashmi

    2010-04-01

    Effective filtering of distractor information has been shown to be dependent on perceptual load. Given the salience of emotional information and the presence of emotion-attention interactions, we wanted to explore the recognition memory for emotional distractors especially as a function of focused attention and distributed attention by manipulating load and the spatial spread of attention. We performed two experiments to study emotion-attention interactions by measuring recognition memory performance for distractor neutral and emotional faces. Participants performed a color discrimination task (low-load) or letter identification task (high-load) with a letter string display in Experiment 1 and a high-load letter identification task with letters presented in a circular array in Experiment 2. The stimuli were presented against a distractor face background. The recognition memory results show that happy faces were recognized better than sad faces under conditions of less focused or distributed attention. When attention is more spatially focused, sad faces were recognized better than happy faces. The study provides evidence for emotion-attention interactions in which specific emotional information like sad or happy is associated with focused or distributed attention respectively. Distractor processing with emotional information also has implications for theories of attention. Copyright 2010 APA, all rights reserved.

  19. Perspective taking in older age revisited: a motivational perspective.

    PubMed

    Zhang, Xin; Fung, Helene H; Stanley, Jennifer T; Isaacowitz, Derek M; Ho, Man Yee

    2013-10-01

    How perspective-taking ability changes with age (i.e., whether older adults are better at understanding others' behaviors and intentions and show greater empathy to others or not) is not clear, with prior empirical findings on this phenomenon yielding mixed results. In a series of experiments, we investigated the phenomenon from a motivational perspective. Perceived closeness between participants and the experimenter (Study 1) or the target in an emotion recognition task (Study 2) was manipulated to examine whether the closeness could influence participants' performance in faux pas recognition (Study 1) and emotion recognition (Study 2). It was found that the well-documented negative age effect (i.e., older adults performed worse than younger adults in faux pas and emotion recognition tasks) was only replicated in the control condition for both tasks. When closeness was experimentally increased, older adults enhanced their performance, and they now performed at a comparable level as younger adults. Findings from the 2 experiments suggest that the reported poorer performance of older adults in perspective-taking tasks might be attributable to a lack of motivation instead of ability to perform in laboratory settings. With the presence of strong motivation, older adults have the ability to perform equally well as younger adults.

  20. Effect of anxiety on memory for emotional information in older adults.

    PubMed

    Herrera, Sara; Montorio, Ignacio; Cabrera, Isabel

    2017-04-01

    Several studies have shown that anxiety is associated with better memory for negative events. However, this anxiety-related memory bias has not been studied in the elderly, in whom there is preferential processing of positive information. The aim was to study the effect of anxiety in a recognition task and an autobiographical memory task in 102 older adults with high and low levels of trait anxiety. Negative, positive and neutral pictures were used in the recognition task. In the autobiographical memory task, participants' memories of their lives were recorded, along with how they felt when thinking about them and the personal relevance of these memories. In the recognition task, no anxiety-related bias was found toward negative information. Individuals with high trait anxiety were found to remember fewer positive pictures than those with low trait anxiety. In the autobiographical memory task, both groups remembered negative and positive events equally. However, people with high trait anxiety remembered life experiences with more negative emotions, especially when remembering negative events. Individuals with low trait anxiety tended to feel more positive emotions when remembering their life experiences, and most of these reports referred to feeling positive emotions when remembering negative events. Older adults with anxiety tend to recognize less positive information and to present more negative emotions when remembering life events, while individuals without anxiety have a more positive experience of negative memories.

  1. Comparing the Frequency Effect Between the Lexical Decision and Naming Tasks in Chinese

    PubMed Central

    Wu, Jei-Tun

    2016-01-01

    In psycholinguistic research, the frequency effect can be one of the indicators for eligible experimental tasks that examine the nature of lexical access. Usually, only one of those tasks is chosen to examine lexical access in a study. Using two exemplar experiments, this paper introduces an approach that includes both the lexical decision task (LDT) and the naming task in a study. In the first experiment, the stimuli were Chinese characters with frequency and regularity manipulated. In the second experiment, the stimuli were switched to Chinese two-character words, in which the word frequency and the regularity of the leading character were manipulated. The logic of these two exemplar experiments was to explore some important issues, such as the role of phonology in recognition, by comparing the frequency effect between the two tasks. The results revealed different patterns of lexical access from those reported in alphabetic systems. The results of Experiment 1 showed a larger frequency effect in the naming task than in the LDT when the stimuli were Chinese characters. Notably, in Experiment 1, when the stimuli were regular Chinese characters, the frequency effect observed in the naming task was roughly equivalent to that in the LDT. However, a smaller frequency effect was found in the naming task than in the LDT when the stimuli were switched to Chinese two-character words in Experiment 2. Taking advantage of the respective demands and characteristics of both tasks, researchers can obtain a more complete and precise picture of character/word recognition. PMID:27077703

  2. Pictures, images, and recollective experience.

    PubMed

    Dewhurst, S A; Conway, M A

    1994-09-01

    Five experiments investigated the influence of picture processing on recollective experience in recognition memory. Subjects studied items that differed in visual or imaginal detail, such as pictures versus words and high-imageability versus low-imageability words, and performed orienting tasks that directed processing either toward a stimulus as a word or toward a stimulus as a picture or image. Standard effects of imageability (e.g., the picture superiority effect and memory advantages following imagery) were obtained only in recognition judgments that featured recollective experience and were eliminated or reversed when recognition was not accompanied by recollective experience. It is proposed that conscious recollective experience in recognition memory is cued by attributes of retrieved memories such as sensory-perceptual attributes and records of cognitive operations performed at encoding.

  3. Development of Encoding and Decision Processes in Visual Recognition.

    ERIC Educational Resources Information Center

    Newcombe, Nora; MacKenzie, Doris L.

    This experiment examined two processes which might account for developmental increases in accuracy in visual recognition tasks: age-related increases in efficiency of scanning during inspection, and age-related increases in the ability to make decisions systematically during test. Critical details necessary for recognition were highlighted as…

  4. Adult Word Recognition and Visual Sequential Memory

    ERIC Educational Resources Information Center

    Holmes, V. M.

    2012-01-01

    Two experiments were conducted investigating the role of visual sequential memory skill in the word recognition efficiency of undergraduate university students. Word recognition was assessed in a lexical decision task using regularly and strangely spelt words, and nonwords that were either standard orthographically legal strings or items made from…

  5. The Role of Antibody in Korean Word Recognition

    ERIC Educational Resources Information Center

    Lee, Chang Hwan; Lee, Yoonhyoung; Kim, Kyungil

    2010-01-01

    A subsyllabic phonological unit, the antibody, has received little attention as a potential fundamental processing unit in word recognition. The psychological reality of the antibody in Korean recognition was investigated by looking at the performance of subjects presented with nonwords and words in the lexical decision task. In Experiment 1, the…

  6. Is Syntactic-Category Processing Obligatory in Visual Word Recognition? Evidence from Chinese

    ERIC Educational Resources Information Center

    Wong, Andus Wing-Kuen; Chen, Hsuan-Chih

    2012-01-01

    Three experiments were conducted to investigate how syntactic-category and semantic information is processed in visual word recognition. The stimuli were two-character Chinese words in which semantic and syntactic-category ambiguities were factorially manipulated. A lexical decision task was employed in Experiment 1, whereas a semantic relatedness…

  7. The memory state heuristic: A formal model based on repeated recognition judgments.

    PubMed

    Castela, Marta; Erdfelder, Edgar

    2017-02-01

    The recognition heuristic (RH) theory predicts that, in comparative judgment tasks, if one object is recognized and the other is not, the recognized one is chosen. The memory-state heuristic (MSH) extends the RH by assuming that choices are not affected by recognition judgments per se, but by the memory states underlying these judgments (i.e., recognition certainty, uncertainty, or rejection certainty). Specifically, the larger the discrepancy between memory states, the larger the probability of choosing the object in the higher state. The typical RH paradigm does not allow estimation of the underlying memory states because it is unknown whether the objects were previously experienced or not. Therefore, we extended the paradigm by repeating the recognition task twice. In line with high threshold models of recognition, we assumed that inconsistent recognition judgments result from uncertainty whereas consistent judgments most likely result from memory certainty. In Experiment 1, we fitted 2 nested multinomial models to the data: an MSH model that formalizes the relation between memory states and binary choices explicitly and an approximate model that ignores the (unlikely) possibility of consistent guesses. Both models provided converging results. As predicted, reliance on recognition increased with the discrepancy in the underlying memory states. In Experiment 2, we replicated these results and found support for choice consistency predictions of the MSH. Additionally, recognition and choice latencies were in agreement with the MSH in both experiments. Finally, we validated critical parameters of our MSH model through a cross-validation method and a third experiment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
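    The extended paradigm above infers memory states from the consistency of two recognition judgments for the same object. A minimal sketch of that classification step, following the high-threshold assumption named in the abstract (the labels and example data are illustrative), is shown below.

      def memory_state(judgment1, judgment2):
          # Two consistent "yes" responses are treated as recognition certainty,
          # two consistent "no" responses as rejection certainty, and an
          # inconsistent pair as uncertainty (the high-threshold assumption).
          if judgment1 == judgment2:
              return "recognition certainty" if judgment1 == "yes" else "rejection certainty"
          return "uncertainty"

      # Hypothetical objects judged twice each in the repeated recognition task.
      pairs = [("yes", "yes"), ("yes", "no"), ("no", "no")]
      print([memory_state(a, b) for a, b in pairs])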

  8. Ease of identifying words degraded by visual noise.

    PubMed

    Barber, P; de la Mahotière, C

    1982-08-01

    A technique is described for investigating word recognition involving the superimposition of 'noise' on the visual target word. For this task a word is printed in the form of letters made up of separate elements; noise consists of additional elements which serve to reduce the ease whereby the words may be recognized, and a threshold-like measure can be obtained in terms of the amount of noise. A word frequency effect was obtained for the noise task, and for words presented tachistoscopically but in conventional typography. For the tachistoscope task, however, the frequency effect depended on the method of presentation. A second study showed no effect of inspection interval on performance on the noise task. A word-frequency effect was also found in a third experiment with tachistoscopic exposure of the noise task stimuli in undegraded form. The question of whether common processes are drawn on by tasks entailing different ways of varying ease of recognition is addressed, and the suitability of different tasks for word recognition research is discussed.

  9. Eye-movement assessment of the time course in facial expression recognition: Neurophysiological implications.

    PubMed

    Calvo, Manuel G; Nummenmaa, Lauri

    2009-12-01

    Happy, surprised, disgusted, angry, sad, fearful, and neutral faces were presented extrafoveally, with fixations on faces allowed or not. The faces were preceded by a cue word that designated the face to be saccaded in a two-alternative forced-choice discrimination task (2AFC; Experiments 1 and 2), or were followed by a probe word for recognition (Experiment 3). Eye tracking was used to decompose the recognition process into stages. Relative to the other expressions, happy faces (1) were identified faster (as early as 160 msec from stimulus onset) in extrafoveal vision, as revealed by shorter saccade latencies in the 2AFC task; (2) required less encoding effort, as indexed by shorter first fixations and dwell times; and (3) required less decision-making effort, as indicated by fewer refixations on the face after the recognition probe was presented. This reveals a happy-face identification advantage both prior to and during overt attentional processing. The results are discussed in relation to prior neurophysiological findings on latencies in facial expression recognition.
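    The decomposition above rests on three standard eye-movement measures: saccade latency, first-fixation duration, and dwell time on the target face. The sketch below derives them from a simple fixation log; the record format and the example trial are hypothetical, not the authors' data format.

      def gaze_measures(fixations, stimulus_onset_ms, region):
          # `fixations` is a list of (region_label, start_ms, end_ms) tuples.
          # Saccade latency = onset of the first fixation on the region minus
          # stimulus onset; first-fixation duration = length of that fixation;
          # dwell time = summed duration of all fixations on the region.
          on_region = [(start, end) for label, start, end in fixations if label == region]
          if not on_region:
              return None
          first_start, first_end = on_region[0]
          saccade_latency = first_start - stimulus_onset_ms
          first_fixation = first_end - first_start
          dwell_time = sum(end - start for start, end in on_region)
          return saccade_latency, first_fixation, dwell_time

      # Hypothetical trial: two fixations on the target face, one elsewhere.
      log = [("face", 180, 420), ("probe", 430, 600), ("face", 610, 790)]
      print(gaze_measures(log, stimulus_onset_ms=0, region="face"))  # (180, 240, 420)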

  10. Pushing typists back on the learning curve: Memory chunking improves retrieval of prior typing episodes.

    PubMed

    Yamaguchi, Motonori; Randle, James M; Wilson, Thomas L; Logan, Gordon D

    2017-09-01

    Hierarchical control of skilled performance depends on chunking of several lower-level units into a single higher-level unit. The present study examined the relationship between chunking and recognition of trained materials in the context of typewriting. In 3 experiments, participants were trained to type nonwords and were later tested on their recognition of the trained materials. In Experiment 1, participants typed the same words or nonwords in 5 consecutive trials while performing a concurrent memory task. In Experiment 2, participants typed the materials with lags between repetitions without a concurrent memory task. In both experiments, recognition of typing materials was associated with better chunking of the materials. Experiment 3 used the remember-know procedure to test the recollection and familiarity components of recognition. Remember judgments were associated with better chunking than know judgments or nonrecognition. These results indicate that chunking is associated with explicit recollection of prior typing episodes. The relevance of the existing memory models to chunking in typewriting was considered, and it is proposed that memory chunking improves retrieval of trained typing materials by integrating contextual cues into the memory traces. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  11. Voice tracking and spoken word recognition in the presence of other voices

    NASA Astrophysics Data System (ADS)

    Litong-Palima, Marisciel; Violanda, Renante; Saloma, Caesar

    2004-12-01

    We study the human hearing process by modeling the hair cell as a thresholded Hopf bifurcator and compare our calculations with experimental results involving human subjects in two different multi-source listening tasks of voice tracking and spoken-word recognition. In the model, we observed noise suppression by destructive interference between noise sources, which weakens the effective noise strength acting on the hair cell. Different success rate characteristics were observed for the two tasks. Hair cell performance at low threshold levels agrees well with results from voice-tracking experiments, while results of word-recognition experiments are consistent with a linear model of the hearing process. The ability of humans to track a target voice is robust against cross-talk interference, unlike word-recognition performance, which deteriorates quickly with the number of uncorrelated noise sources in the environment, a response behavior associated with linear systems.
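    The hair-cell model referred to above builds on a Hopf bifurcation. The sketch below integrates the generic Hopf normal form in the frame co-rotating with a sinusoidal stimulus and shows the cube-root amplification of weak inputs near the bifurcation point; it is a simplification with illustrative parameters, not the authors' thresholded model with interfering noise sources.

      def hopf_steady_amplitude(mu, detuning, forcing, dt=0.01, steps=100000):
          # Hopf normal form in the co-rotating frame:
          #   dz/dt = (mu + i*detuning) * z - |z|^2 * z + forcing
          # Forward-Euler integration; returns the steady-state amplitude |z|.
          z = 0j
          for _ in range(steps):
              z += ((mu + 1j * detuning) * z - (abs(z) ** 2) * z + forcing) * dt
          return abs(z)

      # At the bifurcation point (mu = 0), a resonant stimulus (detuning = 0) is
      # amplified with the characteristic compression |z| ~ forcing ** (1/3).
      for F in (1e-3, 1e-2, 1e-1):
          print(F, round(hopf_steady_amplitude(mu=0.0, detuning=0.0, forcing=F), 3))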

  12. Perceptual effects on remembering: recollective processes in picture recognition memory.

    PubMed

    Rajaram, S

    1996-03-01

    In 3 experiments, the effects of perceptual manipulations on recollective experience were tested. In Experiment 1, a picture-superiority effect was obtained for overall recognition and Remember judgements in a picture recognition task. In Experiment 2, size changes of pictorial stimuli across study and test reduced recognition memory and Remember judgements. In Experiment 3, deleterious effects of changes in left-right orientation of pictorial stimuli across study and test were obtained for Remember judgements. An alternate framework that emphasizes a distinctiveness-fluency processing distinction is proposed to account for these findings because they cannot easily be accommodated within the existing account of differences in conceptual and perceptual processing for the 2 categories of recollective experience: Remembering and Knowing, respectively (J. M. Gardiner, 1988; S. Rajaram, 1993).

  13. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance

    PubMed Central

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity). PMID:27074013

  14. A New Font, Specifically Designed for Peripheral Vision, Improves Peripheral Letter and Word Recognition, but Not Eye-Mediated Reading Performance.

    PubMed

    Bernard, Jean-Baptiste; Aguilar, Carlos; Castet, Eric

    2016-01-01

    Reading speed is dramatically reduced when readers cannot use their central vision. This is because low visual acuity and crowding negatively impact letter recognition in the periphery. In this study, we designed a new font (referred to as the Eido font) in order to reduce inter-letter similarity and consequently to increase peripheral letter recognition performance. We tested this font by running five experiments that compared the Eido font with the standard Courier font. Letter spacing and x-height were identical for the two monospaced fonts. Six normally-sighted subjects used exclusively their peripheral vision to run two aloud reading tasks (with eye movements), a letter recognition task (without eye movements), a word recognition task (without eye movements) and a lexical decision task. Results show that reading speed was not significantly different between the Eido and the Courier font when subjects had to read single sentences with a round simulated gaze-contingent central scotoma (10° diameter). In contrast, Eido significantly decreased perceptual errors in peripheral crowded letter recognition (-30% errors on average for letters briefly presented at 6° eccentricity) and in peripheral word recognition (-32% errors on average for words briefly presented at 6° eccentricity).

  15. About-face on face recognition ability and holistic processing

    PubMed Central

    Richler, Jennifer J.; Floyd, R. Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically. PMID:26223027

  16. About-face on face recognition ability and holistic processing.

    PubMed

    Richler, Jennifer J; Floyd, R Jackie; Gauthier, Isabel

    2015-01-01

    Previous work found a small but significant relationship between holistic processing measured with the composite task and face recognition ability measured by the Cambridge Face Memory Test (CFMT; Duchaine & Nakayama, 2006). Surprisingly, recent work using a different measure of holistic processing (Vanderbilt Holistic Face Processing Test [VHPT-F]; Richler, Floyd, & Gauthier, 2014) and a larger sample found no evidence for such a relationship. In Experiment 1 we replicate this unexpected result, finding no relationship between holistic processing (VHPT-F) and face recognition ability (CFMT). A key difference between the VHPT-F and other holistic processing measures is that unique face parts are used on each trial in the VHPT-F, unlike in other tasks where a small set of face parts repeat across the experiment. In Experiment 2, we test the hypothesis that correlations between the CFMT and holistic processing tasks are driven by stimulus repetition that allows for learning during the composite task. Consistent with our predictions, CFMT performance was correlated with holistic processing in the composite task when a small set of face parts repeated over trials, but not when face parts did not repeat. A meta-analysis confirms that relationships between the CFMT and holistic processing depend on stimulus repetition. These results raise important questions about what is being measured by the CFMT, and challenge current assumptions about why faces are processed holistically.

  17. Individuals with Low Working Memory Spans Show Greater Interference from Irrelevant Information Because of Poor Source Monitoring, Not Greater Activation

    PubMed Central

    Lilienthal, Lindsey; Rose, Nathan S.; Tamez, Elaine; Myerson, Joel; Hale, Sandra

    2014-01-01

    Although individuals with high and low working memory (WM) span appear to differ in the extent to which irrelevant information interferes with their performance on WM tasks, the locus of this interference is not clear. The present study investigated whether, when performing a WM task, high- and low-span individuals differ in the activation of formerly relevant, but now irrelevant items, and/or in their ability to correctly identify such irrelevant items. This was done in two experiments, both of which used modified complex WM span tasks. In Experiment 1, the span task included an embedded lexical decision task designed to obtain an implicit measure of the activation of both currently and formerly relevant items. In Experiment 2, the span task included an embedded recognition judgment task designed to obtain an explicit measure of both item and source recognition ability. The results of these experiments indicate that low-span individuals do not hold irrelevant information in a more active state in memory than high-span individuals, but rather that low-span individuals are significantly poorer at identifying such information as irrelevant at the time of retrieval. These results suggest that differences in the ability to monitor the source of information, rather than differences in the activation of irrelevant information, are the more important determinant of performance on WM tasks. PMID:25921723

  18. The Onset and Time Course of Semantic Priming during Rapid Recognition of Visual Words

    PubMed Central

    Hoedemaker, Renske S.; Gordon, Peter C.

    2016-01-01

    In two experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (Ocular Lexical Decision Task), participants performed a lexical decision task using eye-movement responses on a sequence of four words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a meta-linguistic judgment. For both tasks, survival analyses showed that the earliest-observable effect (Divergence Point or DP) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective rather than a prospective priming mechanism and are consistent with compound-cue models of semantic priming. PMID:28230394
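    The distributional analyses mentioned above combine ex-Gaussian fits with survival curves whose earliest reliable separation defines the divergence point. The sketch below illustrates both steps on made-up reading times; it assumes SciPy's exponnorm parameterization for the ex-Gaussian, and the fixed 2% separation criterion stands in for the bootstrap procedure such analyses typically use.

      import numpy as np
      from scipy import stats

      # Made-up reading times (ms); related-prime trials are generated faster.
      related = stats.exponnorm.rvs(K=2.0, loc=240, scale=40, size=500, random_state=0)
      unrelated = stats.exponnorm.rvs(K=2.0, loc=265, scale=40, size=500, random_state=1)

      # Ex-Gaussian fit: scipy's exponnorm(K, loc, scale) corresponds to the usual
      # ex-Gaussian parameters mu = loc, sigma = scale, tau = K * scale.
      K, mu, sigma = stats.exponnorm.fit(related)
      print(f"related fit: mu={mu:.0f} sigma={sigma:.0f} tau={K * sigma:.0f}")

      # Survival curves S(t) = P(RT > t); the divergence point is taken here as the
      # earliest time at which the curves separate by more than the criterion.
      grid = np.arange(150, 900, 5)
      surv_related = np.array([(related > t).mean() for t in grid])
      surv_unrelated = np.array([(unrelated > t).mean() for t in grid])
      separated = surv_unrelated - surv_related > 0.02
      print("divergence point ~", grid[np.argmax(separated)] if separated.any() else None, "ms")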

  19. The onset and time course of semantic priming during rapid recognition of visual words.

    PubMed

    Hoedemaker, Renske S; Gordon, Peter C

    2017-05-01

    In 2 experiments, we assessed the effects of response latency and task-induced goals on the onset and time course of semantic priming during rapid processing of visual words as revealed by ocular response tasks. In Experiment 1 (ocular lexical decision task), participants performed a lexical decision task using eye movement responses on a sequence of 4 words. In Experiment 2, the same words were encoded for an episodic recognition memory task that did not require a metalinguistic judgment. For both tasks, survival analyses showed that the earliest observable effect (divergence point [DP]) of semantic priming on target-word reading times occurred at approximately 260 ms, and ex-Gaussian distribution fits revealed that the magnitude of the priming effect increased as a function of response time. Together, these distributional effects of semantic priming suggest that the influence of the prime increases when target processing is more effortful. This effect does not require that the task include a metalinguistic judgment; manipulation of the task goals across experiments affected the overall response speed but not the location of the DP or the overall distributional pattern of the priming effect. These results are more readily explained as the result of a retrospective, rather than a prospective, priming mechanism and are consistent with compound-cue models of semantic priming. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Evaluating Effects of Divided Hemispheric Processing on Word Recognition in Foveal and Extrafoveal Displays: The Evidence from Arabic

    PubMed Central

    Almabruk, Abubaker A. A.; Paterson, Kevin B.; McGowan, Victoria; Jordan, Timothy R.

    2011-01-01

    Background: Previous studies have claimed that a precise split at the vertical midline of each fovea causes all words to the left and right of fixation to project to the opposite, contralateral hemisphere, and this division in hemispheric processing has considerable consequences for foveal word recognition. However, research in this area is dominated by the use of stimuli from Latinate languages, which may induce specific effects on performance. Consequently, we report two experiments using stimuli from a fundamentally different, non-Latinate language (Arabic) that offers an alternative way of revealing effects of split-foveal processing, if they exist. Methods and Findings: Words (and pseudowords) were presented to the left or right of fixation, either close to fixation and entirely within foveal vision, or further from fixation and entirely within extrafoveal vision. Fixation location and stimulus presentations were carefully controlled using an eye-tracker linked to a fixation-contingent display. To assess word recognition, Experiment 1 used the Reicher-Wheeler task and Experiment 2 used the lexical decision task. Results: Performance in both experiments indicated a functional division in hemispheric processing for words in extrafoveal locations (in recognition accuracy in Experiment 1 and in reaction times and error rates in Experiment 2) but no such division for words in foveal locations. Conclusions: These findings from a non-Latinate language provide new evidence that although a functional division in hemispheric processing exists for word recognition outside the fovea, this division does not extend up to the point of fixation. Some implications for word recognition and reading are discussed. PMID:21559084

  1. A high-fat high-sugar diet-induced impairment in place-recognition memory is reversible and training-dependent.

    PubMed

    Tran, Dominic M D; Westbrook, R Frederick

    2017-03-01

    A high-fat high-sugar (HFHS) diet is associated with cognitive deficits in people and produces spatial learning and memory deficits in rodents. Notably, such diets rapidly impair place-, but not object-recognition memory in rats within one week of exposure. Three experiments examined whether this impairment was reversed by removal of the diet, or prevented by pre-diet training. Experiment 1 showed that rats switched from HFHS to chow recovered from the place-recognition impairment that they displayed while on HFHS. Experiment 2 showed that control rats ("Untrained") that were exposed to an empty testing arena while on chow were impaired in place-recognition when switched to HFHS and tested for the first time. However, rats tested ("Trained") on the place and object task while on chow were protected from the diet-induced deficit and maintained good place-recognition when switched to HFHS. Experiment 3 examined the conditions of this protection effect by training rats in a square arena while on chow, and testing them in a rectangular arena while on HFHS. We have previously demonstrated that chow rats, but not HFHS rats, show geometry-based reorientation on a rectangular arena place-recognition task (Tran & Westbrook, 2015). Experiment 3 assessed whether rats switched to the HFHS diet after training on the place and object tasks in a square arena would show geometry-based reorientation in a rectangular arena. The protective benefit of training was replicated in the square arena, but both Untrained and Trained HFHS rats failed to show geometry-based reorientation in the rectangular arena. These findings are discussed in relation to the specificity of the training effect, the role of the hippocampus in diet-induced deficits, and their implications for dietary effects on cognition in people. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Working memory for pitch, timbre, and words

    PubMed Central

    Tillmann, Barbara

    2012-01-01

    Aiming to further our understanding of fundamental mechanisms of auditory working memory (WM), the present study compared performance for three auditory materials (words, tones, timbres). In a forward recognition task (Experiment 1), participants indicated whether the order of the items in the second sequence was the same as in the first sequence. In a backward recognition task (Experiment 2), participants indicated whether the items of the second sequence were played in the correct backward order. In Experiment 3, participants performed an articulatory suppression task during the retention delay of the backward task. To investigate potential length effects, the number of items per sequence was manipulated. Overall findings underline the benefit of a cross-material experimental approach and suggest that human auditory WM is not a unitary system. Whereas WM processes for timbres differed from those for tones and words, similarities and differences were observed for words and tones: Both types of stimuli appear to rely on rehearsal mechanisms, but might differ in the involved sensorimotor codes. PMID:23116413

  3. Postencoding cognitive processes in the cross-race effect: Categorization and individuation during face recognition.

    PubMed

    Ho, Michael R; Pezdek, Kathy

    2016-06-01

    The cross-race effect (CRE) describes the finding that same-race faces are recognized more accurately than cross-race faces. According to social-cognitive theories of the CRE, processes of categorization and individuation at encoding account for differential recognition of same- and cross-race faces. Recent face memory research has suggested that similar but distinct categorization and individuation processes also occur postencoding, at recognition. Using a divided-attention paradigm, in Experiments 1A and 1B we tested and confirmed the hypothesis that distinct postencoding categorization and individuation processes occur during the recognition of same- and cross-race faces. Specifically, postencoding configural divided-attention tasks impaired recognition accuracy more for same-race than for cross-race faces; on the other hand, for White (but not Black) participants, postencoding featural divided-attention tasks impaired recognition accuracy more for cross-race than for same-race faces. A social categorization paradigm used in Experiments 2A and 2B tested the hypothesis that the postencoding in-group or out-group social orientation to faces affects categorization and individuation processes during the recognition of same-race and cross-race faces. Postencoding out-group orientation to faces resulted in categorization for White but not for Black participants. This was evidenced by White participants' impaired recognition accuracy for same-race but not for cross-race out-group faces. Postencoding in-group orientation to faces had no effect on recognition accuracy for either same-race or cross-race faces. The results of Experiments 2A and 2B suggest that this social orientation facilitates White but not Black participants' individuation and categorization processes at recognition. Models of recognition memory for same-race and cross-race faces need to account for processing differences that occur at both encoding and recognition.

  4. [Learning virtual routes: what does verbal coding do in working memory?].

    PubMed

    Gyselinck, Valérie; Grison, Élise; Gras, Doriane

    2015-03-01

    Two experiments were run to further our understanding of the role of verbal and visuospatial encoding in the construction of a spatial model from visual input. In Experiment 1, a dual-task paradigm was applied to young adults who learned a route in a virtual environment and then performed a series of nonverbal tasks to assess spatial knowledge. Results indicated that landmark knowledge, as assessed by the visual recognition of landmarks, was not impaired by any of the concurrent tasks. Route knowledge, assessed by recognition of directions, was impaired both by a tapping task and by a concurrent articulation task. Interestingly, the pattern was modulated when no landmarks were available to perform the direction task. A second experiment was designed to explore the role of verbal coding in the construction of landmark and route knowledge. A lexical-decision task was used as a verbal-semantic dual task, and a tone-decision task as a nonsemantic auditory task. Results show that these new concurrent tasks differentially impaired landmark knowledge and route knowledge. The results can be interpreted as showing that the coding of route knowledge may be grounded both in a coding of the sequence of events and in a semantic coding of information. These findings also point to some limits of Baddeley's working memory model. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  5. Tactile memory of deaf-blind adults on four tasks.

    PubMed

    Arnold, Paul; Heiron, Karen

    2002-02-01

    The performance of ten deaf-blind and ten sighted-hearing participants on four tactile memory tasks was investigated. Recognition and recall memory tasks and a matching pairs game were used. It was hypothesized that deaf-blind participants would be superior on each task. Performance was measured in terms of the time taken, and the number of items correctly recalled. In Experiments 1 and 2, which measured recognition memory in terms of the time taken to remember target items, the hypothesis was supported, but not by the length of time taken to recognize the target items, or for the number of target items correctly identified. The hypothesis was supported by Experiment 3, which measured recall memory, with regard to time taken to complete some of the tasks but not for the number of correctly recalled positions. Experiment 4, which used the matching pairs game, supported the hypothesis in terms of both time taken and the number of moves required. It is concluded that the deaf-blind people's tactile encoding is more efficient than that of sighted-hearing people, and that it is probable that their storage and retrieval are normal.

  6. Development of novel tasks for studying view-invariant object recognition in rodents: Sensitivity to scopolamine.

    PubMed

    Mitchnick, Krista A; Wideman, Cassidy E; Huff, Andrew E; Palmer, Daniel; McNaughton, Bruce L; Winters, Boyer D

    2018-05-15

    The capacity to recognize objects from different viewpoints or angles, referred to as view-invariance, is an essential process that humans engage in daily. Currently, the ability to investigate the neurobiological underpinnings of this phenomenon is limited, as few ethologically valid view-invariant object recognition tasks exist for rodents. Here, we report two complementary, novel view-invariant object recognition tasks in which rodents physically interact with three-dimensional objects. Prior to experimentation, rats and mice were given extensive experience with a set of 'pre-exposure' objects. In a variant of the spontaneous object recognition task, novelty preference for pre-exposed or new objects was assessed at various angles of rotation (45°, 90° or 180°); unlike control rodents, for whom the objects were novel, rats and mice tested with pre-exposed objects did not discriminate between rotated and un-rotated objects in the choice phase, indicating substantial view-invariant object recognition. Secondly, using automated operant touchscreen chambers, rats were tested on pre-exposed or novel objects in a pairwise discrimination task, where the rewarded stimulus (S+) was rotated (180°) once rats had reached acquisition criterion; rats tested with pre-exposed objects re-acquired the pairwise discrimination following S+ rotation more effectively than those tested with new objects. Systemic scopolamine impaired performance on both tasks, suggesting involvement of acetylcholine at muscarinic receptors in view-invariant object processing. These tasks present novel means of studying the behavioral and neural bases of view-invariant object recognition in rodents. Copyright © 2018 Elsevier B.V. All rights reserved.
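
    As a concrete illustration of the novelty-preference measure that these spontaneous-recognition variants rely on, the sketch below computes a standard discrimination index from exploration times. This is an assumed, commonly used scoring convention, not the authors' analysis code, and the example times are invented.

        def discrimination_index(changed_s, unchanged_s):
            """(changed - unchanged) / (changed + unchanged); range -1 to 1, 0 = no preference."""
            total = changed_s + unchanged_s
            return (changed_s - unchanged_s) / total if total > 0 else 0.0

        # Illustrative exploration times (seconds) from a choice phase:
        print(discrimination_index(22.0, 14.0))  # ~0.22: the changed (e.g., rotated) object is explored as if novel
        print(discrimination_index(17.5, 17.0))  # ~0.01: both objects treated as equally familiar

    On this convention, an index near zero for pre-exposed animals tested with a rotated object is what signals view-invariant recognition, whereas a reliably positive index indicates that the rotated view is treated as new.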

  7. Misremembering What You See or Hear: Dissociable Effects of Modality on Short- and Long-Term False Recognition

    ERIC Educational Resources Information Center

    Olszewska, Justyna M.; Reuter-Lorenz, Patricia A.; Munier, Emily; Bendler, Sara A.

    2015-01-01

    False working memories readily emerge using a visual item-recognition variant of the converging associates task. Two experiments, manipulating study and test modality, extended prior working memory results by demonstrating a reliable false recognition effect (more false alarms to associatively related lures than to unrelated lures) within seconds…

  8. How to Say No: Single- and Dual-Process Theories of Short-Term Recognition Tested on Negative Probes

    ERIC Educational Resources Information Center

    Oberauer, Klaus

    2008-01-01

    Three experiments with short-term recognition tasks are reported. In Experiments 1 and 2, participants decided whether a probe matched a list item specified by its spatial location. Items presented at study in a different location (intrusion probes) had to be rejected. Serial position curves of positive, new, and intrusion probes over the probed…

  9. Adrenergic enhancement of consolidation of object recognition memory.

    PubMed

    Dornelles, Arethuza; de Lima, Maria Noemia Martins; Grazziotin, Manoela; Presti-Torres, Juliana; Garcia, Vanessa Athaide; Scalco, Felipe Siciliani; Roesler, Rafael; Schröder, Nadja

    2007-07-01

    Extensive evidence indicates that epinephrine (EPI) modulates memory consolidation for emotionally arousing tasks in animals and human subjects. However, previous studies have not examined the effects of EPI on consolidation of recognition memory. Here we report that systemic administration of EPI enhances consolidation of memory for a novel object recognition (NOR) task under different training conditions. Control male rats given a systemic injection of saline (0.9% NaCl) immediately after NOR training showed significant memory retention when tested 1.5 or 24 h, but not 96 h, after training. In contrast, rats given a post-training injection of EPI showed significant retention of NOR at all delays. In a second experiment using a different training condition, rats treated with EPI, but not saline-treated animals, showed significant NOR retention at both the 1.5- and 24-h delays. We next showed that the EPI-induced enhancement of retention tested 96 h after training was prevented by pretraining systemic administration of the beta-adrenoceptor antagonist propranolol. The findings suggest that, as previously observed in experiments using aversively motivated tasks, epinephrine modulates consolidation of recognition memory and that the effects require activation of beta-adrenoceptors.

  10. Semantic Neighborhood Effects for Abstract versus Concrete Words

    PubMed Central

    Danguecan, Ashley N.; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words. PMID:27458422

  11. Semantic Neighborhood Effects for Abstract versus Concrete Words.

    PubMed

    Danguecan, Ashley N; Buchanan, Lori

    2016-01-01

    Studies show that semantic effects may be task-specific, and thus, that semantic representations are flexible and dynamic. Such findings are critical to the development of a comprehensive theory of semantic processing in visual word recognition, which should arguably account for how semantic effects may vary by task. It has been suggested that semantic effects are more directly examined using tasks that explicitly require meaning processing relative to those for which meaning processing is not necessary (e.g., lexical decision task). The purpose of the present study was to chart the processing of concrete versus abstract words in the context of a global co-occurrence variable, semantic neighborhood density (SND), by comparing word recognition response times (RTs) across four tasks varying in explicit semantic demands: standard lexical decision task (with non-pronounceable non-words), go/no-go lexical decision task (with pronounceable non-words), progressive demasking task, and sentence relatedness task. The same experimental stimulus set was used across experiments and consisted of 44 concrete and 44 abstract words, with half of these being low SND, and half being high SND. In this way, concreteness and SND were manipulated in a factorial design using a number of visual word recognition tasks. A consistent RT pattern emerged across tasks, in which SND effects were found for abstract (but not necessarily concrete) words. Ultimately, these findings highlight the importance of studying interactive effects in word recognition, and suggest that linguistic associative information is particularly important for abstract words.

  12. Music to my ears: Age-related decline in musical and facial emotion recognition.

    PubMed

    Sutcliffe, Ryan; Rendell, Peter G; Henry, Julie D; Bailey, Phoebe E; Ruffman, Ted

    2017-12-01

    We investigated young-old differences in emotion recognition using music and face stimuli and tested explanatory hypotheses regarding older adults' typically worse emotion recognition. In Experiment 1, young and older adults labeled emotions in an established set of faces, and in classical piano stimuli that we pilot-tested on other young and older adults. Older adults were worse at detecting anger, sadness, fear, and happiness in music. Performance on the music and face emotion tasks was not correlated for either age group. Because musical expressions of fear were not equated for age groups in the pilot study of Experiment 1, we conducted a second experiment in which we created a novel set of music stimuli that included more accessible musical styles, and which we again pilot-tested on young and older adults. In this pilot study, all musical emotions were identified similarly by young and older adults. In Experiment 2, participants also made age estimations in another set of faces to examine whether potential relations between the face and music emotion tasks would be shared with the age estimation task. Older adults did worse in each of the tasks, and had specific difficulty recognizing happy, sad, peaceful, angry, and fearful music clips. Older adults' difficulties in each of the 3 tasks-music emotion, face emotion, and face age-were not correlated with each other. General cognitive decline did not appear to explain our results as increasing age predicted emotion performance even after fluid IQ was controlled for within the older adult group. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  13. Stress enhances reconsolidation of declarative memory.

    PubMed

    Bos, Marieke G N; Schuijer, Jantien; Lodestijn, Fleur; Beckers, Tom; Kindt, Merel

    2014-08-01

    Retrieval of negative emotional memories is often accompanied by the experience of stress. Upon retrieval, a memory trace can temporarily return into a labile state, where it is vulnerable to change. An unresolved question is whether post-retrieval stress may affect the strength of declarative memory in humans by modulating the reconsolidation process. Here, we tested in two experiments whether post-reactivation stress may affect the strength of declarative memory in humans. In both experiments, participants were instructed to learn neutral, positive and negative words. Approximately 24 h later, participants received a reminder of the word list followed by exposure to the social evaluative cold pressor task (reactivation/stress group; n = 20 in Experiment 1, n = 18 in Experiment 2) or a control task (reactivation/no-stress group; n = 23 and n = 18, respectively). An additional control group was solely exposed to the stress task, without memory reactivation (no-reactivation/stress group; n = 23 and n = 21, respectively). The next day, memory performance was tested using a free recall and a recognition task. In the first experiment we showed that participants in the reactivation/stress group recalled more words than participants in the reactivation/no-stress and no-reactivation/stress groups, irrespective of the valence of the word stimuli. Furthermore, participants in the reactivation/stress group made more false recognition errors. In the second experiment we replicated our observations on the free recall task for a new set of word stimuli, but we did not find any differences in false recognition. The current findings indicate that post-reactivation stress can improve declarative memory performance by modulating the process of reconsolidation. This finding contributes to our understanding of why some memories are more persistent than others. Copyright © 2014. Published by Elsevier Ltd.

  14. Action Recognition in a Crowded Environment

    PubMed Central

    Nieuwenhuis, Judith; Bülthoff, Isabelle; Barraclough, Nick; de la Rosa, Stephan

    2017-01-01

    So far, action recognition has been mainly examined with small point-light human stimuli presented alone within a narrow central area of the observer’s visual field. Yet, we need to recognize the actions of life-size humans viewed alone or surrounded by bystanders, whether they are seen in central or peripheral vision. Here, we examined the mechanisms in central vision and far periphery (40° eccentricity) involved in the recognition of the actions of a life-size actor (target) and their sensitivity to the presence of a crowd surrounding the target. In Experiment 1, we used an action adaptation paradigm to probe whether static or idly moving crowds might interfere with the recognition of a target’s action (hug or clap). We found that this type of crowd, whose movements were dissimilar to the target action, hardly affected action recognition in central and peripheral vision. In Experiment 2, we examined whether crowd actions that were more similar to the target actions affected action recognition. Indeed, the presence of that crowd diminished adaptation aftereffects in central vision as well as in the periphery. We replicated Experiment 2 using a recognition task instead of an adaptation paradigm. With this task, we found evidence of decreased action recognition accuracy, but this was significant in peripheral vision only. Our results suggest that the presence of a crowd carrying out actions similar to that of the target affects its recognition. We outline how these results can be understood in terms of high-level crowding effects that operate on action-sensitive perceptual channels. PMID:29308177

  15. Non-native Listeners’ Recognition of High-Variability Speech Using PRESTO

    PubMed Central

    Tamati, Terrin N.; Pisoni, David B.

    2015-01-01

    Background: Natural variability in speech is a significant challenge to robust successful spoken word recognition. In everyday listening environments, listeners must quickly adapt and adjust to multiple sources of variability in both the signal and listening environments. High-variability speech may be particularly difficult to understand for non-native listeners, who have less experience with the second language (L2) phonological system and less detailed knowledge of sociolinguistic variation of the L2. Purpose: The purpose of this study was to investigate the effects of high-variability sentences on non-native speech recognition and to explore the underlying sources of individual differences in speech recognition abilities of non-native listeners. Research Design: Participants completed two sentence recognition tasks involving high-variability and low-variability sentences. They also completed a battery of behavioral tasks and self-report questionnaires designed to assess their indexical processing skills, vocabulary knowledge, and several core neurocognitive abilities. Study Sample: Native speakers of Mandarin (n = 25) living in the United States recruited from the Indiana University community participated in the current study. A native comparison group consisted of scores obtained from native speakers of English (n = 21) in the Indiana University community taken from an earlier study. Data Collection and Analysis: Speech recognition in high-variability listening conditions was assessed with a sentence recognition task using sentences from PRESTO (Perceptually Robust English Sentence Test Open-Set) mixed in 6-talker multitalker babble. Speech recognition in low-variability listening conditions was assessed using sentences from HINT (Hearing In Noise Test) mixed in 6-talker multitalker babble. Indexical processing skills were measured using a talker discrimination task, a gender discrimination task, and a forced-choice regional dialect categorization task. Vocabulary knowledge was assessed with the WordFam word familiarity test, and executive functioning was assessed with the BRIEF-A (Behavioral Rating Inventory of Executive Function – Adult Version) self-report questionnaire. Scores from the non-native listeners on behavioral tasks and self-report questionnaires were compared with scores obtained from native listeners tested in a previous study and were examined for individual differences. Results: Non-native keyword recognition scores were significantly lower on PRESTO sentences than on HINT sentences. Non-native listeners’ keyword recognition scores were also lower than native listeners’ scores on both sentence recognition tasks. Differences in performance on the sentence recognition tasks between non-native and native listeners were larger on PRESTO than on HINT, although group differences varied by signal-to-noise ratio. The non-native and native groups also differed in the ability to categorize talkers by region of origin and in vocabulary knowledge. Individual non-native word recognition accuracy on PRESTO sentences in multitalker babble at more favorable signal-to-noise ratios was found to be related to several BRIEF-A subscales and composite scores. However, non-native performance on PRESTO was not related to regional dialect categorization, talker and gender discrimination, or vocabulary knowledge. Conclusions: High-variability sentences in multitalker babble were particularly challenging for non-native listeners. Difficulty under high-variability testing conditions was related to lack of experience with the L2, especially L2 sociolinguistic information, compared with native listeners. Individual differences among the non-native listeners were related to weaknesses in core neurocognitive abilities affecting behavioral control in everyday life. PMID:25405842

  16. Haloperidol increases false recognition memory of thematically related pictures in healthy volunteers.

    PubMed

    Guarnieri, Regina V; Buratto, Luciano G; Gomes, Carlos F A; Ribeiro, Rafaela L; de Souza, Altay A Lino; Stein, Lilian M; Galduróz, José C; Bueno, Orlando F A

    2017-01-01

    Dopamine can modulate long-term episodic memory. Its potential role in the generation of false memories, however, is less well known. In a randomized, double-blind, placebo-controlled experiment, 24 young healthy volunteers ingested a 4-mg oral dose of haloperidol, a dopamine D2-receptor antagonist, or placebo, before taking part in a recognition memory task. Haloperidol was active during both the study and test phases of the experiment. Participants in the haloperidol group produced more false recognition responses than those in the placebo group, despite similar levels of correct recognition. These findings show that dopamine blockade in healthy volunteers can specifically increase false recognition memory. Copyright © 2016 John Wiley & Sons, Ltd.

  17. The Roles of Spreading Activation and Retrieval Mode in Producing False Recognition in the DRM Paradigm

    ERIC Educational Resources Information Center

    Meade, Michelle L.; Watson, Jason M.; Balota, David A.; Roediger, Henry L., III

    2007-01-01

    The nature of persisting spreading activation from list presentation in eliciting false recognition in the Deese-Roediger-McDermott (DRM) paradigm was examined in two experiments. We compared the time course of semantic priming in the lexical decision task (LDT) and false alarms in speeded recognition under identical study and test conditions. The…

  18. Modulating Memory Performance in Healthy Subjects with Transcranial Direct Current Stimulation Over the Right Dorsolateral Prefrontal Cortex.

    PubMed

    Smirni, Daniela; Turriziani, Patrizia; Mangano, Giuseppa Renata; Cipolotti, Lisa; Oliveri, Massimiliano

    2015-01-01

    The role of the dorsolateral prefrontal cortex (DLPFC) in recognition memory has been well documented in lesion, neuroimaging and repetitive transcranial magnetic stimulation (rTMS) studies. The aim of the present study was to investigate the effects of transcranial direct current stimulation (tDCS) over the left and the right DLPFC during the delay interval of a non-verbal recognition memory task. Thirty-six right-handed young healthy subjects participated in the study. The experimental task was an Italian version of the Recognition Memory Test for unknown faces. The study included two experiments: in the first experiment, each subject underwent one session of sham tDCS and one session of left or right cathodal tDCS; in the second experiment, each subject underwent one session of sham tDCS and one session of left or right anodal tDCS. Cathodal tDCS over the right DLPFC significantly improved non-verbal recognition memory performance, while cathodal tDCS over the left DLPFC had no effect. Anodal tDCS of both the left and right DLPFC did not modify non-verbal recognition memory performance. Complementing the majority of previous studies, which report long-term memory facilitation following left prefrontal anodal tDCS, the present findings show that cathodal tDCS of the right DLPFC can also improve recognition memory in healthy subjects.

  19. Electrophysiological evidence for flexible goal-directed cue processing during episodic retrieval.

    PubMed

    Herron, Jane E; Evans, Lisa H; Wilding, Edward L

    2016-05-15

    A widely held assumption is that memory retrieval is aided by cognitive control processes that are engaged flexibly in service of memory retrieval and memory decisions. While there is some empirical support for this view, a notable exception is the absence of evidence for the flexible use of retrieval control in functional neuroimaging experiments requiring frequent switches between tasks with different cognitive demands. This absence is troublesome in so far as frequent switches between tasks mimic some of the challenges that are typically placed on memory outside the laboratory. In this experiment we instructed participants to alternate frequently between three episodic memory tasks requiring item recognition or retrieval of one of two different kinds of contextual information encoded in a prior study phase (screen location or encoding task). Event-related potentials (ERPs) elicited by unstudied items in the two tasks requiring retrieval of study context were reliably different, demonstrating for the first time that ERPs index task-specific processing of retrieval cues when retrieval goals change frequently. The inclusion of the item recognition task was a novel and important addition in this study, because only the ERPs elicited by unstudied items in one of the two context conditions diverged from those in the item recognition condition. This outcome constrains functional interpretations of the differences that emerged between the two context conditions and emphasises the utility of this baseline in functional imaging studies of retrieval processing operations. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Electrophysiological evidence for flexible goal-directed cue processing during episodic retrieval

    PubMed Central

    Herron, Jane E.; Evans, Lisa H.; Wilding, Edward L.

    2016-01-01

    A widely held assumption is that memory retrieval is aided by cognitive control processes that are engaged flexibly in service of memory retrieval and memory decisions. While there is some empirical support for this view, a notable exception is the absence of evidence for the flexible use of retrieval control in functional neuroimaging experiments requiring frequent switches between tasks with different cognitive demands. This absence is troublesome in so far as frequent switches between tasks mimic some of the challenges that are typically placed on memory outside the laboratory. In this experiment we instructed participants to alternate frequently between three episodic memory tasks requiring item recognition or retrieval of one of two different kinds of contextual information encoded in a prior study phase (screen location or encoding task). Event-related potentials (ERPs) elicited by unstudied items in the two tasks requiring retrieval of study context were reliably different, demonstrating for the first time that ERPs index task-specific processing of retrieval cues when retrieval goals change frequently. The inclusion of the item recognition task was a novel and important addition in this study, because only the ERPs elicited by unstudied items in one of the two context conditions diverged from those in the item recognition condition. This outcome constrains functional interpretations of the differences that emerged between the two context conditions and emphasises the utility of this baseline in functional imaging studies of retrieval processing operations. PMID:26892854

  1. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions.

    PubMed

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants' tendency to over-attribute the anger label to other negative facial expressions. Participants' heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants' performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, there was a significant effect of heart rate on participants' tendency to use the anger label. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children's "pre-existing bias" for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim's perceptual and attentional focus toward salient environmental social stimuli.

  2. Impact of Childhood Maltreatment on the Recognition of Facial Expressions of Emotions

    PubMed Central

    Ardizzi, Martina; Martini, Francesca; Umiltà, Maria Alessandra; Evangelista, Valentina; Ravera, Roberto; Gallese, Vittorio

    2015-01-01

    The development of the explicit recognition of facial expressions of emotions can be affected by childhood maltreatment experiences. A previous study demonstrated the existence of an explicit recognition bias for angry facial expressions among a population of adolescent Sierra Leonean street-boys exposed to high levels of maltreatment. In the present study, the recognition bias for angry facial expressions was investigated in a younger population of street-children and age-matched controls. Participants performed a forced-choice facial expression recognition task. Recognition bias was measured as participants’ tendency to over-attribute the anger label to other negative facial expressions. Participants’ heart rate was assessed and related to their behavioral performance, as an index of their stress-related physiological responses. Results demonstrated the presence of a recognition bias for angry facial expressions among street-children, also pinpointing a similar, although significantly less pronounced, tendency among controls. Participants’ performance was controlled for age, cognitive and educational levels, and naming skills. None of these variables influenced the recognition bias for angry facial expressions. In contrast, there was a significant effect of heart rate on participants’ tendency to use the anger label. Taken together, these results suggest that childhood exposure to maltreatment experiences amplifies children’s “pre-existing bias” for anger labeling in a forced-choice emotion recognition task. Moreover, they strengthen the thesis that the recognition bias for angry facial expressions is a manifestation of a functional adaptive mechanism that tunes the victim’s perceptual and attentional focus toward salient environmental social stimuli. PMID:26509890

  3. Remembering and knowing: two means of access to the personal past.

    PubMed

    Rajaram, S

    1993-01-01

    The nature of recollective experience was examined in a recognition memory task. Subjects gave "remember" judgments to recognized items that were accompanied by conscious recollection and "know" judgments to items that were recognized on some other basis. Although a levels-of-processing effect (Experiment 1) and a picture-superiority effect (Experiment 2) were obtained for overall recognition, these effects occurred only for "remember" judgments, and were reversed for "know" judgments. In Experiment 3, targets and lures were either preceded by a masked repetition of their own presentation (thought to increase perceptual fluency) or of an unrelated word. The effect of perceptual fluency was obtained for overall recognition and "know" judgments but not for "remember" judgments. The data obtained for confidence judgments using the same design (Experiment 4) indicated that "remember"/"know" judgments are not made solely on the basis of confidence. These data support the two-factor theories of recognition memory by dissociating two forms of recognition, and shed light on the nature of conscious recollection.

  4. Domain-Generality of Timing-Based Serial Order Processes in Short-Term Memory: New Insights from Musical and Verbal Domains

    PubMed Central

    Gorin, Simon; Kowialiewski, Benjamin; Majerus, Steve

    2016-01-01

    Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes. PMID:27992565

  5. Domain-Generality of Timing-Based Serial Order Processes in Short-Term Memory: New Insights from Musical and Verbal Domains.

    PubMed

    Gorin, Simon; Kowialiewski, Benjamin; Majerus, Steve

    2016-01-01

    Several models in the verbal domain of short-term memory (STM) consider a dissociation between item and order processing. This view is supported by data demonstrating that different types of time-based interference have a greater effect on memory for the order of to-be-remembered items than on memory for the items themselves. The present study investigated the domain-generality of the item versus serial order dissociation by comparing the differential effects of time-based interfering tasks, such as rhythmic interference and articulatory suppression, on item and order processing in verbal and musical STM domains. In Experiment 1, participants had to maintain sequences of verbal or musical information in STM, followed by a probe sequence, this under different conditions of interference (no-interference, rhythmic interference, articulatory suppression). They were required to decide whether all items of the probe list matched those of the memory list (item condition) or whether the order of the items in the probe sequence matched the order in the memory list (order condition). In Experiment 2, participants performed a serial order probe recognition task for verbal and musical sequences ensuring sequential maintenance processes, under no-interference or rhythmic interference conditions. For Experiment 1, serial order recognition was not significantly more impacted by interfering tasks than was item recognition, this for both verbal and musical domains. For Experiment 2, we observed selective interference of the rhythmic interference condition on both musical and verbal order STM tasks. Overall, the results suggest a similar and selective sensitivity to time-based interference for serial order STM in verbal and musical domains, but only when the STM tasks ensure sequential maintenance processes.

  6. When the face fits: recognition of celebrities from matching and mismatching faces and voices.

    PubMed

    Stevenage, Sarah V; Neil, Greg J; Hamlin, Iain

    2014-01-01

    The results of two experiments are presented in which participants engaged in a face-recognition or a voice-recognition task. The stimuli were face-voice pairs in which the face and voice were co-presented and were either "matched" (same person), "related" (two highly associated people), or "mismatched" (two unrelated people). Analysis in both experiments confirmed that accuracy and confidence in face recognition was consistently high regardless of the identity of the accompanying voice. However accuracy of voice recognition was increasingly affected as the relationship between voice and accompanying face declined. Moreover, when considering self-reported confidence in voice recognition, confidence remained high for correct responses despite the proportion of these responses declining across conditions. These results converged with existing evidence indicating the vulnerability of voice recognition as a relatively weak signaller of identity, and results are discussed in the context of a person-recognition framework.

  7. Impaired recognition of body expressions in the behavioral variant of frontotemporal dementia.

    PubMed

    Van den Stock, Jan; De Winter, François-Laurent; de Gelder, Beatrice; Rangarajan, Janaki Raman; Cypers, Gert; Maes, Frederik; Sunaert, Stefan; Goffin, Karolien; Vandenberghe, Rik; Vandenbulcke, Mathieu

    2015-08-01

    Progressive deterioration of social cognition and emotion processing are core symptoms of the behavioral variant of frontotemporal dementia (bvFTD). Here we investigate whether bvFTD is also associated with impaired recognition of static (Experiment 1) and dynamic (Experiment 2) bodily expressions. In addition, we compared body expression processing with processing of static (Experiment 3) and dynamic (Experiment 4) facial expressions, as well as with face identity processing (Experiment 5). The results reveal that bvFTD is associated with impaired recognition of static and dynamic bodily and facial expressions, while identity processing was intact. No differential impairments were observed regarding motion (static vs. dynamic) or category (body vs. face). Within the bvFTD group, we observed a significant partial correlation between body and face expression recognition, when controlling for performance on the identity task. Voxel-Based Morphometry (VBM) analysis revealed that body emotion recognition was positively associated with gray matter volume in a region of the inferior frontal gyrus (pars orbitalis/triangularis). The results are in line with a supramodal emotion recognition deficit in bvFTD. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Evidence for view-invariant face recognition units in unfamiliar face learning.

    PubMed

    Etchells, David B; Brooks, Joseph L; Johnston, Robert A

    2017-05-01

    Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face learning procedure from that in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view, a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that was outside the rotation of those views learned. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared to single view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.

  9. Impact of Intention on the ERP Correlates of Face Recognition

    ERIC Educational Resources Information Center

    Guillaume, Fabrice; Tiberghien, Guy

    2013-01-01

    The present study investigated the impact of study-test similarity on face recognition by manipulating, in the same experiment, the expression change (same vs. different) and the task-processing context (inclusion vs. exclusion instructions) as within-subject variables. Consistent with the dual-process framework, the present results showed that…

  10. Interfering with memory for faces: The cost of doing two things at once.

    PubMed

    Wammes, Jeffrey D; Fernandes, Myra A

    2016-01-01

    We inferred the processes critical for episodic retrieval of faces by measuring susceptibility to memory interference from different distracting tasks. Experiment 1 examined recognition of studied faces under full attention (FA) or each of two divided attention (DA) conditions requiring concurrent decisions to auditorily presented letters. Memory was disrupted in both DA relative to FA conditions, a result contrary to a material-specific account of interference effects. Experiment 2 investigated whether the magnitude of interference depended on competition between concurrent tasks for common processing resources. Studied faces were presented either upright (configurally processed) or inverted (featurally processed). Recognition was completed under FA, or DA with one of two face-based distracting tasks requiring either featural or configural processing. We found an interaction: memory for upright faces was lower under DA when the distracting task required configural than featural processing, while the reverse was true for memory of inverted faces. Across experiments, the magnitude of memory interference was similar (a 19% or 20% decline from FA) regardless of whether the materials in the distracting task overlapped with the to-be-remembered information. Importantly, interference was significantly larger (42%) when the processing demands of the distracting and target retrieval task overlapped, suggesting a processing-specific account of memory interference.

  11. Exploring the Neural Representation of Novel Words Learned through Enactment in a Word Recognition Task

    PubMed Central

    Macedonia, Manuela; Mueller, Karsten

    2016-01-01

    Vocabulary learning in a second language is enhanced if learners enrich the learning experience with self-performed iconic gestures. This learning strategy is called enactment. Here we explore how enacted words are functionally represented in the brain and which brain regions contribute to enhance retention. After an enactment training lasting 4 days, participants performed a word recognition task in the functional Magnetic Resonance Imaging (fMRI) scanner. Data analysis suggests the participation of different and partially intertwined networks that are engaged in higher cognitive processes, i.e., enhanced attention and word recognition. Also, an experience-related network seems to map word representation. Besides core language regions, this latter network includes sensory and motor cortices, the basal ganglia, and the cerebellum. On the basis of its complexity and the involvement of the motor system, this sensorimotor network might explain superior retention for enactment. PMID:27445918

  12. Working memory capacity may influence perceived effort during aided speech recognition in noise.

    PubMed

    Rudner, Mary; Lunner, Thomas; Behrens, Thomas; Thorén, Elisabet Sundewall; Rönnberg, Jerker

    2012-09-01

    Recently there has been interest in using subjective ratings as a measure of perceived effort during speech recognition in noise. Perceived effort may be an indicator of cognitive load. Thus, subjective effort ratings during speech recognition in noise may covary with both signal-to-noise ratio (SNR) and individual cognitive capacity. The present study investigated the relation between subjective ratings of the effort involved in listening to speech in noise, speech recognition performance, and individual working memory (WM) capacity in hearing-impaired hearing aid users. In two experiments, participants with hearing loss rated perceived effort during aided speech perception in noise. Noise type and SNR were manipulated in both experiments, and in the second experiment hearing aid compression release settings were also manipulated. Speech recognition performance was measured along with WM capacity. There were 46 participants in all with bilateral mild to moderate sloping hearing loss. In Experiment 1 there were 16 native Danish speakers (eight women and eight men) with a mean age of 63.5 yr (SD = 12.1) and average pure tone (PT) threshold of 47.6 dB (SD = 9.8). In Experiment 2 there were 30 native Swedish speakers (19 women and 11 men) with a mean age of 70 yr (SD = 7.8) and average PT threshold of 45.8 dB (SD = 6.6). A visual analog scale (VAS) was used for effort rating in both experiments. In Experiment 1, effort was rated at individually adapted SNRs while in Experiment 2 it was rated at fixed SNRs. Speech recognition in noise performance was measured using adaptive procedures in both experiments with Dantale II sentences in Experiment 1 and Hagerman sentences in Experiment 2. WM capacity was measured using a letter-monitoring task in Experiment 1 and the reading span task in Experiment 2. In both experiments, there was a strong and significant relation between rated effort and SNR that was independent of individual WM capacity, whereas the relation between rated effort and noise type seemed to be influenced by individual WM capacity. Experiment 2 showed that hearing aid compression setting influenced rated effort. Subjective ratings of the effort involved in speech recognition in noise reflect SNRs, and individual cognitive capacity seems to influence relative rating of noise type. American Academy of Audiology.
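
    As background for the "adaptive procedures" mentioned in this record, the toy Python sketch below shows the general logic of a simple one-up/one-down SNR track that converges on the speech reception threshold near 50% correct. It is an illustration under stated assumptions, not the Dantale II or Hagerman procedure used in the study; the step size, trial count, and simulated listener are invented.

        import random

        def adaptive_snr_track(listener_correct, start_snr=0.0, step=2.0, n_trials=20):
            """Toy 1-up/1-down track: SNR drops after a correct trial, rises after an error."""
            snr, visited = start_snr, []
            for _ in range(n_trials):
                visited.append(snr)
                snr += -step if listener_correct(snr) else step
            # Estimate the threshold as the mean SNR over the second half of the track.
            half = visited[n_trials // 2:]
            return sum(half) / len(half)

        # Simulated listener whose chance of a correct response rises with SNR (logistic).
        listener = lambda snr: random.random() < 1.0 / (1.0 + 2.718 ** (-(snr + 4.0)))
        print(round(adaptive_snr_track(listener), 1), "dB SNR (estimated threshold)")

    A one-up/one-down rule targets the 50%-correct point; the fixed SNRs used for effort ratings in Experiment 2 simply probe performance and perceived effort at pre-chosen points instead of tracking a threshold.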

  13. Body schema and corporeal self-recognition in the alien hand syndrome.

    PubMed

    Olgiati, Elena; Maravita, Angelo; Spandri, Viviana; Casati, Roberta; Ferraro, Francesco; Tedesco, Lucia; Agostoni, Elio Clemente; Bolognini, Nadia

    2017-07-01

    The alien hand syndrome (AHS) is a rare neuropsychological disorder characterized by involuntary, yet purposeful, hand movements. Patients with the AHS typically complain about a loss of agency associated with a feeling of estrangement for actions performed by the affected limb. The present study explores the integrity of the body representation in AHS, focusing on 2 main processes: multisensory integration and visual self-recognition of body parts. Three patients affected by AHS following a right-hemisphere stroke, with clinical symptoms akin to the posterior variant of AHS, were tested and their performance was compared with that of 18 age-matched healthy controls. AHS patients and controls underwent 2 experimental tasks: a same-different visual matching task for body postures, which assessed the ability to use one's own body schema to encode others' body postural changes (Experiment 1), and an explicit self-hand recognition task, which assessed the ability to visually recognize one's own hands (Experiment 2). As compared to controls, all AHS patients were unable to access a reliable multisensory representation of their alien hand and use it for decoding others' postural changes; however, they could rely on an efficient multisensory representation of their intact (ipsilesional) hand. Two AHS patients also presented with a specific impairment in the visual self-recognition of their alien hand, but normal recognition of their intact hand. This evidence suggests that the AHS following a right-hemisphere stroke may involve a disruption of the multisensory representation of the alien limb; instead, self-hand recognition mechanisms may be spared. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Learning the moves: the effect of familiarity and facial motion on person recognition across large changes in viewing format.

    PubMed

    Roark, Dana A; O'Toole, Alice J; Abdi, Hervé; Barrett, Susan E

    2006-01-01

    Familiarity with a face or person can support recognition in tasks that require generalization to novel viewing contexts. Using naturalistic viewing conditions requiring recognition of people from face or whole body gait stimuli, we investigated the effects of familiarity, facial motion, and direction of learning/test transfer on person recognition. Participants were familiarized with previously unknown people from gait videos and were tested on faces (experiment 1a) or were familiarized with faces and were tested with gait videos (experiment 1b). Recognition was more accurate when learning from the face and testing with the gait videos, than when learning from the gait videos and testing with the face. The repetition of a single stimulus, either the face or gait, produced strong recognition gains across transfer conditions. Also, the presentation of moving faces resulted in better performance than that of static faces. In experiment 2, we investigated the role of facial motion further by testing recognition with static profile images. Motion provided no benefit for recognition, indicating that structure-from-motion is an unlikely source of the motion advantage found in the first set of experiments.

  15. Cue quality and criterion setting in recognition memory.

    PubMed

    Kent, Christopher; Lamberts, Koen; Patton, Richard

    2018-02-02

    Previous studies on how people set and modify decision criteria in old-new recognition tasks (in which they have to decide whether or not a stimulus was seen in a study phase) have almost exclusively focused on properties of the study items, such as presentation frequency or study list length. In contrast, in the three studies reported here, we manipulated the quality of the test cues in a scene-recognition task, either by degrading through Gaussian blurring (Experiment 1) or by limiting presentation duration (Experiments 2 and 3). In Experiments 1 and 2, degradation of the test cue led to worse old-new discrimination. Most importantly, however, participants were more liberal in their responses to degraded cues (i.e., more likely to call the cue "old"), demonstrating strong within-list, item-by-item, criterion shifts. This liberal response bias toward degraded stimuli came at the cost of increasing the false alarm rate while maintaining a constant hit rate. Experiment 3 replicated Experiment 2 with additional stimulus types (words and faces) but did not provide accuracy feedback to participants. The criterion shifts in Experiment 3 were smaller in magnitude than those in Experiments 1 and 2 and varied in consistency across stimulus type, suggesting, in line with previous studies, that feedback is important for participants to shift their criteria.
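    The criterion shifts described in this record are conventionally quantified with equal-variance signal detection measures (sensitivity d' and criterion c). Below is a minimal sketch of that computation; the hit and false-alarm rates are made-up illustrative values, not data from the study.

```python
from scipy.stats import norm

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection: sensitivity d' and criterion c."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa               # old/new discrimination
    criterion = -0.5 * (z_hit + z_fa)    # negative values indicate liberal responding
    return d_prime, criterion

# Hypothetical rates for intact vs. degraded test cues: a constant hit rate with a
# higher false-alarm rate lowers d' and pushes the criterion in the liberal direction.
print(sdt_measures(hit_rate=0.75, fa_rate=0.20))   # intact cues
print(sdt_measures(hit_rate=0.75, fa_rate=0.35))   # degraded cues
```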

  16. Deficits in object-in-place but not relative recency performance in the APPswe/PS1dE9 mouse model of Alzheimer's disease: Implications for object recognition.

    PubMed

    Bonardi, Charlotte; Pardon, Marie-Christine; Armstrong, Paul

    2016-10-15

    Performance was examined on three variants of the spontaneous object recognition (SOR) task, in 5-month-old APPswe/PS1dE9 mice and wild-type littermate controls. A deficit was observed in an object-in-place (OIP) task, in which mice are preexposed to four different objects in specific locations, and then at test two of the objects swap locations (Experiment 2). Typically more exploration is seen of the objects which have switched location, which is taken as evidence of a retrieval-generated priming mechanism. However, no significant transgenic deficit was found in a relative recency (RR) task (Experiment 1), in which mice are exposed to two different objects in two separate sample phases, and then tested with both objects. Typically more exploration of the first-presented object is observed, which is taken as evidence of a self-generated priming mechanism. Nor was there any impairment in the simplest variant, the spontaneous object recognition (SOR) task, in which mice are preexposed to one object and then tested with the familiar and a novel object. This was true regardless of whether the sample-test interval was 5 min (Experiment 1) or 24 h (Experiments 1 and 2). It is argued that SOR performance depends on retrieval-generated priming as well as self-generated priming, and our preliminary evidence suggests that the retrieval-generated priming process is especially impaired in these young transgenic animals. Copyright © 2016 Elsevier B.V. All rights reserved.

  17. Semantic contribution to verbal short-term memory: are pleasant words easier to remember than neutral words in serial recall and serial recognition?

    PubMed

    Monnier, Catherine; Syssau, Arielle

    2008-01-01

    In the four experiments reported here, we examined the role of word pleasantness on immediate serial recall and immediate serial recognition. In Experiment 1, we compared verbal serial recall of pleasant and neutral words, using a limited set of items. In Experiment 2, we replicated Experiment 1 with an open set of words (i.e., new items were used on every trial). In Experiments 3 and 4, we assessed immediate serial recognition of pleasant and neutral words, using item sets from Experiments 1 and 2. Pleasantness was found to have a facilitation effect on both immediate serial recall and immediate serial recognition. This study supplies some new supporting arguments in favor of a semantic contribution to verbal short-term memory performance. The pleasantness effect observed in immediate serial recognition showed that, contrary to a number of earlier findings, performance on this task can also turn out to be dependent on semantic factors. The results are discussed in relation to nonlinguistic and psycholinguistic models of short-term memory.

  18. Changes in Visual Object Recognition Precede the Shape Bias in Early Noun Learning

    PubMed Central

    Yee, Meagan; Jones, Susan S.; Smith, Linda B.

    2012-01-01

    Two of the most formidable skills that characterize human beings are language and our prowess in visual object recognition. They may also be developmentally intertwined. Two experiments, a large sample cross-sectional study and a smaller sample 6-month longitudinal study of 18- to 24-month-olds, tested a hypothesized developmental link between changes in visual object representation and noun learning. Previous findings in visual object recognition indicate that children’s ability to recognize common basic level categories from sparse structural shape representations of object shape emerges between the ages of 18 and 24 months, is related to noun vocabulary size, and is lacking in children with language delay. Other research shows in artificial noun learning tasks that during this same developmental period, young children systematically generalize object names by shape, that this shape bias predicts future noun learning, and is lacking in children with language delay. The two experiments examine the developmental relation between visual object recognition and the shape bias for the first time. The results show that developmental changes in visual object recognition systematically precede the emergence of the shape bias. The results suggest a developmental pathway in which early changes in visual object recognition that are themselves linked to category learning enable the discovery of higher-order regularities in category structure and thus the shape bias in novel noun learning tasks. The proposed developmental pathway has implications for understanding the role of specific experience in the development of both visual object recognition and the shape bias in early noun learning. PMID:23227015

  19. Superordinate Level Processing Has Priority Over Basic-Level Processing in Scene Gist Recognition

    PubMed Central

    Sun, Qi; Zheng, Yang; Sun, Mingxia; Zheng, Yuanjie

    2016-01-01

    By combining a perceptual discrimination task and a visuospatial working memory task, the present study examined the effects of visuospatial working memory load on the hierarchical processing of scene gist. In the perceptual discrimination task, two scene images from the same (manmade–manmade pairing or natural–natural pairing) or different superordinate level categories (manmade–natural pairing) were presented simultaneously, and participants were asked to judge whether these two images belonged to the same basic-level category (e.g., street–street pairing) or not (e.g., street–highway pairing). In the concurrent working memory task, spatial load (position-based load in Experiment 1) and object load (figure-based load in Experiment 2) were manipulated. The results were as follows: (a) spatial load and object load have stronger effects on discrimination of same basic-level scene pairings than same superordinate level scene pairings; (b) spatial load has a larger impact on the discrimination of scene pairings at early stages than at later stages; by contrast, object information has a larger influence at later stages than at early stages. It follows that superordinate level processing has priority over basic-level processing in scene gist recognition, and that spatial information contributes to the earlier and object information to the later stages of scene gist recognition. PMID:28382195

  20. Spatial release of cognitive load measured in a dual-task paradigm in normal-hearing and hearing-impaired listeners.

    PubMed

    Xia, Jing; Nooraei, Nazanin; Kalluri, Sridhar; Edwards, Brent

    2015-04-01

    This study investigated whether spatial separation between talkers helps reduce cognitive processing load, and how hearing impairment interacts with the cognitive load of individuals listening in multi-talker environments. A dual-task paradigm was used in which performance on a secondary task (visual tracking) served as a measure of the cognitive load imposed by a speech recognition task. Visual tracking performance was measured under four conditions in which the target and the interferers were distinguished by (1) gender and spatial location, (2) gender only, (3) spatial location only, and (4) neither gender nor spatial location. Results showed that when gender cues were available, a 15° spatial separation between talkers reduced the cognitive load of listening even though it did not provide further improvement in speech recognition (Experiment I). Compared to normal-hearing listeners, large individual variability in spatial release of cognitive load was observed among hearing-impaired listeners. Cognitive load was lower when talkers were spatially separated by 60° than when talkers were of different genders, even though speech recognition was comparable in these two conditions (Experiment II). These results suggest that a measure of cognitive load might provide valuable insight into the benefit of spatial cues in multi-talker environments.

  1. Age differences in accuracy and choosing in eyewitness identification and face recognition.

    PubMed

    Searcy, J H; Bartlett, J C; Memon, A

    1999-05-01

    Studies of aging and face recognition show age-related increases in false recognitions of new faces. To explore implications of this false alarm effect, we had young and senior adults perform (1) three eyewitness identification tasks, using both target present and target absent lineups, and (2) an old/new recognition task in which a study list of faces was followed by a test including old and new faces, along with conjunctions of old faces. Compared with the young, seniors had lower accuracy and higher choosing rates on the lineups, and they also falsely recognized more new faces on the recognition test. However, after screening for perceptual processing deficits, there was no age difference in false recognition of conjunctions, or in discriminating old faces from conjunctions. We conclude that the false alarm effect generalizes to lineup identification, but does not extend to conjunction faces. The findings are consistent with age-related deficits in recollection of context and relative age invariance in perceptual integrative processes underlying the experience of familiarity.

  2. Facial Expression Influences Face Identity Recognition During the Attentional Blink

    PubMed Central

    2014-01-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry—suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another. PMID:25286076

  3. Facial expression influences face identity recognition during the attentional blink.

    PubMed

    Bach, Dominik R; Schmidt-Daffy, Martin; Dolan, Raymond J

    2014-12-01

    Emotional stimuli (e.g., negative facial expressions) enjoy prioritized memory access when task relevant, consistent with their ability to capture attention. Whether emotional expression also impacts on memory access when task-irrelevant is important for arbitrating between feature-based and object-based attentional capture. Here, the authors address this question in 3 experiments using an attentional blink task with face photographs as first and second target (T1, T2). They demonstrate reduced neutral T2 identity recognition after angry or happy T1 expression, compared to neutral T1, and this supports attentional capture by a task-irrelevant feature. Crucially, after neutral T1, T2 identity recognition was enhanced and not suppressed when T2 was angry-suggesting that attentional capture by this task-irrelevant feature may be object-based and not feature-based. As an unexpected finding, both angry and happy facial expressions suppress memory access for competing objects, but only angry facial expression enjoyed privileged memory access. This could imply that these 2 processes are relatively independent from one another.

  4. Foveational Complexity in Single Word Identification: Contralateral Visual Pathways Are Advantaged over Ipsilateral Pathways

    ERIC Educational Resources Information Center

    Obregon, Mateo; Shillcock, Richard

    2012-01-01

    Recognition of a single word is an elemental task in innumerable cognitive psychology experiments, but involves unexpected complexity. We test a controversial claim that the human fovea is vertically divided, with each half projecting to either the contralateral or ipsilateral hemisphere, thereby influencing foveal word recognition. We report a…

  5. Lexical and Metrical Stress in Word Recognition: Lexical or Pre-Lexical Influences?

    ERIC Educational Resources Information Center

    Slowiaczek, Louisa M.; Soltano, Emily G.; Bernstein, Hilary L.

    2006-01-01

    The influence of lexical stress and/or metrical stress on spoken word recognition was examined. Two experiments were designed to determine whether response times in lexical decision or shadowing tasks are influenced when primes and targets share lexical stress patterns (JUVenile-BIBlical [Syllables printed in capital letters indicate those…

  6. Developmental Trajectories of Part-Based and Configural Object Recognition in Adolescence

    ERIC Educational Resources Information Center

    Juttner, Martin; Wakui, Elley; Petters, Dean; Kaur, Surinder; Davidoff, Jules

    2013-01-01

    Three experiments assessed the development of children's part and configural (part-relational) processing in object recognition during adolescence. In total, 312 school children aged 7-16 years and 80 adults were tested in 3-alternative forced choice (3-AFC) tasks. They judged the correct appearance of upright and inverted presented familiar…

  7. Can corrective feedback improve recognition memory?

    PubMed

    Kantner, Justin; Lindsay, D Stephen

    2010-06-01

    An understanding of the effects of corrective feedback on recognition memory can inform both recognition theory and memory training programs, but few published studies have investigated the issue. Although the evidence to date suggests that feedback does not improve recognition accuracy, few studies have directly examined its effect on sensitivity, and fewer have created conditions that facilitate a feedback advantage by encouraging controlled processing at test. In Experiment 1, null effects of feedback were observed following both deep and shallow encoding of categorized study lists. In Experiment 2, feedback robustly influenced response bias by allowing participants to discern highly uneven base rates of old and new items, but sensitivity remained unaffected. In Experiment 3, a false-memory procedure, feedback failed to attenuate false recognition of critical lures. In Experiment 4, participants were unable to use feedback to learn a simple category rule separating old items from new items, despite the fact that feedback was of substantial benefit in a nearly identical categorization task. The recognition system, despite a documented ability to utilize controlled strategic or inferential decision-making processes, appears largely impenetrable to a benefit of corrective feedback.

  8. The Episodic Nature of Episodic-Like Memories

    ERIC Educational Resources Information Center

    Easton, Alexander; Webster, Lisa A. D.; Eacott, Madeline J.

    2012-01-01

    Studying episodic memory in nonhuman animals has proved difficult because definitions in humans require conscious recollection. Here, we assessed humans' experience of episodic-like recognition memory tasks that have been used with animals. It was found that tasks using contextual information to discriminate events could only be accurately…

  9. Learning task affects ERP-correlates of the own-race bias, but not recognition memory performance.

    PubMed

    Stahl, Johanna; Wiese, Holger; Schweinberger, Stefan R

    2010-06-01

    People are generally better in recognizing faces from their own ethnic group as opposed to faces from another ethnic group, a finding which has been interpreted in the context of two opposing theories. Whereas perceptual expertise theories stress the role of long-term experience with one's own ethnic group, race feature theories assume that the processing of an other-race-defining feature triggers inferior coding and recognition of faces. The present study tested these hypotheses by manipulating the learning task in a recognition memory test. At learning, one group of participants categorized faces according to ethnicity, whereas another group rated facial attractiveness. Subsequent recognition tests indicated clear and similar own-race biases for both groups. However, ERPs from learning and test phases demonstrated an influence of learning task on neurophysiological processing of own- and other-race faces. While both groups exhibited larger N170 responses to Asian as compared to Caucasian faces, task-dependent differences were seen in a subsequent P2 ERP component. Whereas the P2 was more pronounced for Caucasian faces in the categorization group, this difference was absent in the attractiveness rating group. The learning task thus influences early face encoding. Moreover, comparison with recent research suggests that this attractiveness rating task influences the processes reflected in the P2 in a similar manner as perceptual expertise for other-race faces does. By contrast, the behavioural own-race bias suggests that long-term expertise is required to increase other-race face recognition and hence attenuate the own-race bias. Copyright 2010 Elsevier Ltd. All rights reserved.

  10. Covert face recognition in congenital prosopagnosia: a group study.

    PubMed

    Rivolta, Davide; Palermo, Romina; Schmalzl, Laura; Coltheart, Max

    2012-03-01

    Even though people with congenital prosopagnosia (CP) never develop a normal ability to "overtly" recognize faces, some individuals show indices of "covert" (or implicit) face recognition. The aim of this study was to demonstrate covert face recognition in CP when participants could not overtly recognize the faces. Eleven people with CP completed three tasks assessing their overt face recognition ability, and three tasks assessing their "covert" face recognition: a Forced choice familiarity task, a Forced choice cued task, and a Priming task. Evidence of covert recognition was observed with the Forced choice familiarity task, but not the Priming task. In addition, we propose that the Forced choice cued task does not measure covert processing as such, but instead "provoked-overt" recognition. Our study clearly shows that people with CP demonstrate covert recognition for faces that they cannot overtly recognize, and that behavioural tasks vary in their sensitivity to detect covert recognition in CP. Copyright © 2011 Elsevier Srl. All rights reserved.

  11. Areas Recruited during Action Understanding Are Not Modulated by Auditory or Sign Language Experience.

    PubMed

    Fang, Yuxing; Chen, Quanjing; Lingnau, Angelika; Han, Zaizhu; Bi, Yanchao

    2016-01-01

    The observation of other people's actions recruits a network of areas including the inferior frontal gyrus (IFG), the inferior parietal lobule (IPL), and posterior middle temporal gyrus (pMTG). These regions have been shown to be activated through both visual and auditory inputs. Intriguingly, previous studies found no engagement of IFG and IPL for deaf participants during non-linguistic action observation, leading to the proposal that auditory experience or sign language usage might shape the functionality of these areas. To understand which variables induce plastic changes in areas recruited during the processing of other people's actions, we examined the effects of tasks (action understanding and passive viewing) and effectors (arm actions vs. leg actions), as well as sign language experience in a group of 12 congenitally deaf signers and 13 hearing participants. In Experiment 1, we found a stronger activation during an action recognition task in comparison to a low-level visual control task in IFG, IPL and pMTG in both deaf signers and hearing individuals, but no effect of auditory or sign language experience. In Experiment 2, we replicated the results of the first experiment using a passive viewing task. Together, our results provide robust evidence demonstrating that the response obtained in IFG, IPL, and pMTG during action recognition and passive viewing is not affected by auditory or sign language experience, adding further support for the supra-modal nature of these regions.

  12. It Takes Time to Prime: Semantic Priming in the Ocular Lexical Decision Task

    PubMed Central

    Hoedemaker, Renske S.; Gordon, Peter C.

    2014-01-01

    Two eye-tracking experiments were conducted in which the manual response mode typically used in lexical decision tasks (LDT) was replaced with an eye-movement response through a sequence of three words. This ocular LDT combines the explicit control of task goals found in LDTs with the highly practiced ocular response used in reading text. In Experiment 1, forward saccades indicated an affirmative LD on each word in the triplet. In Experiment 2, LD responses were delayed until all three letter strings had been read. The goal of the study was to evaluate the contribution of task goals and response mode to semantic priming. Semantic priming is very robust in tasks that involve recognition of words in isolation, such as LDT, while limited during text reading as measured using eye movements. Gaze durations in both experiments showed robust semantic priming even though ocular response times were much shorter than manual LDs for the same words in the English Lexicon Project. Ex-Gaussian distribution fits revealed that the priming effect was concentrated in estimates of τ, meaning that priming was most pronounced in the slow tail of the distribution. This pattern shows differential use of the prime information, which may be more heavily recruited in cases where the LD is difficult as indicated by longer response times. Compared to the manual LD responses, ocular LDs provide a more sensitive measure of this task-related influence on word recognition as measured by the LDT. PMID:25181368
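    The ex-Gaussian decomposition mentioned in this record models a response-time distribution as a Gaussian component (mu, sigma) plus an exponential tail (tau), so priming concentrated in the slow tail shows up as a change in tau rather than in mu. Below is a minimal illustrative sketch of such a fit using SciPy's exponnorm parameterization; the simulated response times and parameter values are assumptions, not the study's data.

```python
import numpy as np
from scipy.stats import exponnorm

rng = np.random.default_rng(0)

# Simulate ex-Gaussian response times: normal(mu, sigma) plus an exponential(tau) tail.
mu, sigma, tau = 0.45, 0.05, 0.12   # seconds; illustrative values only
rts = rng.normal(mu, sigma, 2000) + rng.exponential(tau, 2000)

# scipy.stats.exponnorm uses shape K = tau / sigma, loc = mu, scale = sigma.
K_hat, mu_hat, sigma_hat = exponnorm.fit(rts)
tau_hat = K_hat * sigma_hat
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}, tau = {tau_hat:.3f}")
```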

  13. Cultural differences in visual object recognition in 3-year-old children

    PubMed Central

    Kuwabara, Megumi; Smith, Linda B.

    2016-01-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition (e.g. Nisbett & Miyamoto, 2005). Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (n=128) examined the degree to which nonface object recognition by 3 year olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects in which only 3 diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children and likelihood of recognition increased for U.S., but not Japanese children when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children’s recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. PMID:26985576

  14. Cultural differences in visual object recognition in 3-year-old children.

    PubMed

    Kuwabara, Megumi; Smith, Linda B

    2016-07-01

    Recent research indicates that culture penetrates fundamental processes of perception and cognition. Here, we provide evidence that these influences begin early and influence how preschool children recognize common objects. The three tasks (N=128) examined the degree to which nonface object recognition by 3-year-olds was based on individual diagnostic features versus more configural and holistic processing. Task 1 used a 6-alternative forced choice task in which children were asked to find a named category in arrays of masked objects where only three diagnostic features were visible for each object. U.S. children outperformed age-matched Japanese children. Task 2 presented pictures of objects to children piece by piece. U.S. children recognized the objects given fewer pieces than Japanese children, and the likelihood of recognition increased for U.S. children, but not Japanese children, when the piece added was rated by both U.S. and Japanese adults as highly defining. Task 3 used a standard measure of configural processing, asking the degree to which recognition of matching pictures was disrupted by the rotation of one picture. Japanese children's recognition was more disrupted by inversion than was that of U.S. children, indicating more configural processing by Japanese than U.S. children. The pattern suggests early cross-cultural differences in visual processing; findings that raise important questions about how visual experiences differ across cultures and about universal patterns of cognitive development. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Dissociations of the number and precision of visual short-term memory representations in change detection.

    PubMed

    Xie, Weizhen; Zhang, Weiwei

    2017-11-01

    The present study dissociated the number (i.e., quantity) and precision (i.e., quality) of visual short-term memory (STM) representations in change detection using receiver operating characteristic (ROC) and experimental manipulations. Across three experiments, participants performed both recognition and recall tests of visual STM using the change-detection task and the continuous color-wheel recall task, respectively. Experiment 1 demonstrated that the estimates of the number and precision of visual STM representations based on the ROC model of change-detection performance were robustly correlated with the corresponding estimates based on the mixture model of continuous-recall performance. Experiments 2 and 3 showed that the experimental manipulation of mnemonic precision using white-noise masking and the experimental manipulation of the number of encoded STM representations using consolidation masking produced selective effects on the corresponding measures of mnemonic precision and the number of encoded STM representations, respectively, in both change-detection and continuous-recall tasks. Altogether, using the individual-differences (Experiment 1) and experimental-dissociation (Experiments 2 and 3) approaches, the present study demonstrated the some-or-none nature of visual STM representations across recall and recognition.
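    The mixture model of continuous-recall performance referred to in this record is commonly implemented as a von Mises memory component centered on the studied color plus a uniform guessing component over the color wheel, with the mixture weight estimating how many items are in memory and the von Mises concentration estimating precision. Below is a minimal maximum-likelihood sketch under that assumption; the simulated errors and starting values are illustrative, not the study's data or code.

```python
import numpy as np
from scipy.stats import vonmises
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Simulate recall errors (radians): with probability p_mem the item is in memory
# (von Mises error around 0), otherwise the response is a uniform guess on the wheel.
p_mem, kappa = 0.7, 8.0
in_memory = rng.random(1000) < p_mem
errors = np.where(in_memory,
                  vonmises.rvs(kappa, size=1000),
                  rng.uniform(-np.pi, np.pi, 1000))

def neg_log_lik(params):
    p, k = params
    lik = p * vonmises.pdf(errors, k) + (1 - p) / (2 * np.pi)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=[0.5, 4.0], bounds=[(0.01, 0.99), (0.1, 100.0)])
p_hat, kappa_hat = fit.x
print(f"P(in memory) = {p_hat:.2f}, precision kappa = {kappa_hat:.1f}")
```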

  16. The “parts and wholes” of face recognition: a review of the literature

    PubMed Central

    Tanaka, James W.; Simonyi, Diana

    2016-01-01

    It has been claimed that faces are recognized as a “whole” rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for its parts was tested in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The “whole face” or holistic advantage was not found for faces that were inverted, or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a “whole” stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing. PMID:26886495

  17. The "parts and wholes" of face recognition: A review of the literature.

    PubMed

    Tanaka, James W; Simonyi, Diana

    2016-10-01

    It has been claimed that faces are recognized as a "whole" rather than by the recognition of individual parts. In a paper published in the Quarterly Journal of Experimental Psychology in 1993, Martha Farah and I attempted to operationalize the holistic claim using the part/whole task. In this task, participants studied a face and then their memory for its parts was tested in isolation and in the whole face. Consistent with the holistic view, recognition of the part was superior when tested in the whole-face condition compared to when it was tested in isolation. The "whole face" or holistic advantage was not found for faces that were inverted, or scrambled, nor for non-face objects, suggesting that holistic encoding was specific to normal, intact faces. In this paper, we reflect on the part/whole paradigm and how it has contributed to our understanding of what it means to recognize a face as a "whole" stimulus. We describe the value of the part/whole task for developing theories of holistic and non-holistic recognition of faces and objects. We discuss the research that has probed the neural substrates of holistic processing in healthy adults and people with prosopagnosia and autism. Finally, we examine how experience shapes holistic face recognition in children and recognition of own- and other-race faces in adults. The goal of this article is to summarize the research on the part/whole task and speculate on how it has informed our understanding of holistic face processing.

  18. Visual Word Recognition Across the Adult Lifespan

    PubMed Central

    Cohen-Shikora, Emily R.; Balota, David A.

    2016-01-01

    The current study examines visual word recognition in a large sample (N = 148) across the adult lifespan and across a large set of stimuli (N = 1187) in three different lexical processing tasks (pronunciation, lexical decision, and animacy judgments). Although the focus of the present study is on the influence of word frequency, a diverse set of other variables are examined as the system ages and acquires more experience with language. Computational models and conceptual theories of visual word recognition and aging make differing predictions for age-related changes in the system. However, these have been difficult to assess because prior studies have produced inconsistent results, possibly due to sample differences, analytic procedures, and/or task-specific processes. The current study confronts these potential differences by using three different tasks, treating age and word variables as continuous, and exploring the influence of individual differences such as vocabulary, vision, and working memory. The primary finding is remarkable stability in the influence of a diverse set of variables on visual word recognition across the adult age spectrum. This pattern is discussed in reference to previous inconsistent findings in the literature and implications for current models of visual word recognition. PMID:27336629

  19. Predicting Reasoning from Memory

    ERIC Educational Resources Information Center

    Heit, Evan; Hayes, Brett K.

    2011-01-01

    In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the…

  20. The role of color information on object recognition: a review and meta-analysis.

    PubMed

    Bramão, Inês; Reis, Alexandra; Petersson, Karl Magnus; Faísca, Luís

    2011-09-01

    In this study, we systematically review the scientific literature on the effect of color on object recognition. Thirty-five independent experiments, comprising 1535 participants, were included in a meta-analysis. We found a moderate effect of color on object recognition (d=0.28). Specific effects of moderator variables were analyzed and we found that color diagnosticity is the factor with the greatest moderator effect on the influence of color in object recognition; studies using color diagnostic objects showed a significant color effect (d=0.43), whereas a marginal color effect was found in studies that used non-color diagnostic objects (d=0.18). The present study did not permit the drawing of specific conclusions about the moderator effect of the object recognition task; while the meta-analytic review showed that color information improves object recognition mainly in studies using naming tasks (d=0.36), the literature review revealed a large body of evidence showing positive effects of color information on object recognition in studies using a large variety of visual recognition tasks. We also found that color is important for the ability to recognize artifacts and natural objects, to recognize objects presented as types (line-drawings) or as tokens (photographs), and to recognize objects that are presented without surface details, such as texture or shadow. Taken together, the results of the meta-analysis strongly support the contention that color plays a role in object recognition. This suggests that the role of color should be taken into account in models of visual object recognition. Copyright © 2011 Elsevier B.V. All rights reserved.
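    The pooled effect sizes reported in this record (e.g., d = 0.28) are typically obtained by inverse-variance weighting of per-study standardized mean differences. Below is a minimal fixed-effect sketch of that computation; the study values and group sizes are invented for illustration and are not the experiments reviewed in the meta-analysis.

```python
import numpy as np

# Hypothetical per-study Cohen's d values and per-group sample sizes (illustration only).
d  = np.array([0.45, 0.10, 0.60, 0.25])
n1 = np.array([20, 35, 18, 40])   # e.g., color condition
n2 = np.array([20, 35, 18, 40])   # e.g., grayscale condition

# Approximate sampling variance of d, then fixed-effect (inverse-variance) pooling.
var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
w = 1.0 / var_d
d_pooled = np.sum(w * d) / np.sum(w)
se_pooled = np.sqrt(1.0 / np.sum(w))
print(f"pooled d = {d_pooled:.2f}, 95% CI half-width = {1.96 * se_pooled:.2f}")
```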

  1. Auditory perception of a human walker.

    PubMed

    Cottrell, David; Campbell, Megan E J

    2014-01-01

    When one hears footsteps in the hall, one is able to instantly recognise it as a person: this is an everyday example of auditory biological motion perception. Despite the familiarity of this experience, research into this phenomenon is in its infancy compared with visual biological motion perception. Here, two experiments explored sensitivity to, and recognition of, auditory stimuli of biological and nonbiological origin. We hypothesised that the cadence of a walker gives rise to a temporal pattern of impact sounds that facilitates the recognition of human motion from auditory stimuli alone. First a series of detection tasks compared sensitivity with three carefully matched impact sounds: footsteps, a ball bouncing, and drumbeats. Unexpectedly, participants were no more sensitive to footsteps than to impact sounds of nonbiological origin. In the second experiment participants made discriminations between pairs of the same stimuli, in a series of recognition tasks in which the temporal pattern of impact sounds was manipulated to be either that of a walker or the pattern more typical of the source event (a ball bouncing or a drumbeat). Under these conditions, there was evidence that both temporal and nontemporal cues were important in recognising these stimuli. It is proposed that the interval between footsteps, which reflects a walker's cadence, is a cue for the recognition of the sounds of a human walking.

  2. The short- and long-term consequences of directed forgetting in a working memory task.

    PubMed

    Festini, Sara B; Reuter-Lorenz, Patricia A

    2013-01-01

    Directed forgetting requires the voluntary control of memory. Whereas many studies have examined directed forgetting in long-term memory (LTM), the mechanisms and effects of directed forgetting within working memory (WM) are less well understood. The current study tests how directed forgetting instructions delivered in a WM task influence veridical memory, as well as false memory, over the short and long term. In a modified item recognition task, Experiment 1 tested WM only and demonstrated that directed forgetting reduces false recognition errors and semantic interference. Experiment 2 replicated these WM effects and used a surprise LTM recognition test to assess the long-term effects of directed forgetting in WM. Long-term veridical memory for to-be-remembered lists was better than memory for to-be-forgotten lists (the directed forgetting effect). Moreover, fewer false memories emerged for to-be-forgotten information than for to-be-remembered information in LTM as well. These results indicate that directed forgetting during WM reduces semantic processing of to-be-forgotten lists over the short and long term. Implications for theories of false memory and the mechanisms of directed forgetting within working memory are discussed.

  3. The image-interpretation-workstation of the future: lessons learned

    NASA Astrophysics Data System (ADS)

    Maier, S.; van de Camp, F.; Hafermann, J.; Wagner, B.; Peinsipp-Byma, E.; Beyerer, J.

    2017-05-01

    In recent years, professionally used workstations have become increasingly complex, and multi-monitor systems are more and more common. Novel interaction techniques like gesture recognition were developed but used mostly for entertainment and gaming purposes. These human computer interfaces are not yet widely used in professional environments where they could greatly improve the user experience. To approach this problem, we combined existing tools in our image-interpretation-workstation of the future, a multi-monitor workplace comprised of four screens. Each screen is dedicated to a special task in the image interpreting process: a geo-information system to geo-reference the images and provide a spatial reference for the user, an interactive recognition support tool, an annotation tool and a reporting tool. To further support the complex task of image interpreting, self-developed interaction systems for head-pose estimation and hand tracking were used in addition to more common technologies like touchscreens, face identification and speech recognition. A set of experiments were conducted to evaluate the usability of the different interaction systems. Two typical extensive tasks of image interpreting were devised and approved by military personnel. They were then tested with a current setup of an image interpreting workstation using only keyboard and mouse against our image-interpretation-workstation of the future. To get a more detailed look at the usefulness of the interaction techniques in a multi-monitor setup, the hand tracking, head pose estimation and the face recognition were further evaluated using tests inspired by everyday tasks. The results of the evaluation and the discussion are presented in this paper.

  4. The aftermath of memory retrieval for recycling visual working memory representations.

    PubMed

    Park, Hyung-Bum; Zhang, Weiwei; Hyun, Joo-Seok

    2017-07-01

    We examined the aftermath of accessing and retrieving a subset of information stored in visual working memory (VWM)-namely, whether detection of a mismatch between memory and perception can impair the original memory of an item while triggering recognition-induced forgetting for the remaining, untested items. For this purpose, we devised a consecutive-change detection task wherein two successive testing probes were displayed after a single set of memory items. Across two experiments utilizing different memory-testing methods (whole vs. single probe), we observed a reliable pattern of poor performance in change detection for the second test when the first test had exhibited a color change. The impairment after a color change was evident even when the same memory item was repeatedly probed; this suggests that an attention-driven, salient visual change made it difficult to reinstate the previously remembered item. The second change detection, for memory items untested during the first change detection, was also found to be inaccurate, indicating that recognition-induced forgetting had occurred for the unprobed items in VWM. In a third experiment, we conducted a task that involved change detection plus continuous recall, wherein a memory recall task was presented after the change detection task. The analyses of the distributions of recall errors with a probabilistic mixture model revealed that the memory impairments from both visual changes and recognition-induced forgetting are explained better by the stochastic loss of memory items than by their degraded resolution. These results indicate that attention-driven visual change and recognition-induced forgetting jointly influence the "recycling" of VWM representations.

  5. Binding and Inhibition in Working Memory: Individual and Age Differences in Short-Term Recognition

    ERIC Educational Resources Information Center

    Oberauer, Klaus

    2005-01-01

    Two studies investigated the relationship between working memory capacity (WMC), adult age, and the resolution of conflict between familiarity and recollection in short-term recognition tasks. Experiment 1 showed a specific deficit of young adults with low WMC in rejecting intrusion probes (i.e., highly familiar probes) in a modified Sternberg…

  6. Movement Contributes to Infants' Recognition of the Human Form

    ERIC Educational Resources Information Center

    Christie, Tamara; Slaughter, Virginia

    2010-01-01

    Three experiments demonstrate that biological movement facilitates young infants' recognition of the whole human form. A body discrimination task was used in which 6-, 9-, and 12-month-old infants were habituated to typical human bodies and then shown scrambled human bodies at the test. Recovery of interest to the scrambled bodies was observed in…

  7. Emotion Recognition in Children with Down Syndrome: Influence of Emotion Label and Expression Intensity

    ERIC Educational Resources Information Center

    Cebula, Katie R.; Wishart, Jennifer G.; Willis, Diane S.; Pitcairn, Tom K.

    2017-01-01

    Some children with Down syndrome may experience difficulties in recognizing facial emotions, particularly fear, but it is not clear why, nor how such skills can best be facilitated. Using a photo-matching task, emotion recognition was tested in children with Down syndrome, children with nonspecific intellectual disability and cognitively matched,…

  8. Concreteness effects in short-term memory: a test of the item-order hypothesis.

    PubMed

    Roche, Jaclynn; Tolan, G Anne; Tehan, Gerald

    2011-12-01

    The following experiments explore word length and concreteness effects in short-term memory within an item-order processing framework. This framework asserts that order memory is better for those items that are relatively easy to process at the item level. However, words that are difficult to process benefit at the item level from the increased attention/resources applied to them. The prediction of the model is that differential item and order processing can be detected in episodic tasks that differ in the degree to which item or order memory are required by the task. The item-order account has been applied to the word length effect such that there is a short word advantage in serial recall but a long word advantage in item recognition. The current experiment considered the possibility that concreteness effects might be explained within the same framework. In two experiments, word length (Experiment 1) and concreteness (Experiment 2) are examined using forward serial recall, backward serial recall, and item recognition. The results for word length replicate previous studies showing the dissociation in item and order tasks. The same was not true for the concreteness effect. In all three tasks concrete words were better remembered than abstract words. The concreteness effect cannot be explained in terms of an item-order trade-off. PsycINFO Database Record (c) 2011 APA, all rights reserved.

  9. Young pigs exhibit differential exploratory behavior during novelty preference tasks in response to age, sex, and delay.

    PubMed

    Fleming, Stephen A; Dilger, Ryan N

    2017-03-15

    Novelty preference paradigms have been widely used to study recognition memory and its neural substrates. The piglet model continues to advance the study of neurodevelopment, and as such, tasks that use novelty preference will prove especially useful due to their translatable nature to humans. However, there has been little use of this behavioral paradigm in the pig, and previous studies using the novel object recognition paradigm in piglets have yielded inconsistent results. The current study was conducted to determine if piglets were capable of displaying a novelty preference. Herein a series of experiments were conducted using novel object recognition or location in 3- and 4-week-old piglets. In the novel object recognition task, piglets were able to discriminate between novel and sample objects after delays of 2 min, 1 h, 1 day, and 2 days (all P<0.039) at both ages. Performance was sex-dependent, as females could perform both 1- and 2-day delays (P<0.036) and males could perform the 2-day delay (P=0.008) but not the 1-day delay (P=0.347). Furthermore, 4-week-old piglets and females tended to exhibit greater exploratory behavior compared with males. Such performance did not extend to novel location recognition tasks, as piglets were only able to discriminate between novel and sample locations after a short delay (P>0.046). In conclusion, this study determined that piglets are able to perform the novel object and location recognition tasks at 3-to-4 weeks of age; however, performance was dependent on sex, age, and delay. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Sensory Contributions to Impaired Emotion Processing in Schizophrenia

    PubMed Central

    Butler, Pamela D.; Abeles, Ilana Y.; Weiskopf, Nicole G.; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E.; Zemon, Vance; Loughead, James; Gur, Ruben C.; Javitt, Daniel C.

    2009-01-01

    Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective. PMID:19793797

  11. Sensory contributions to impaired emotion processing in schizophrenia.

    PubMed

    Butler, Pamela D; Abeles, Ilana Y; Weiskopf, Nicole G; Tambini, Arielle; Jalbrzikowski, Maria; Legatt, Michael E; Zemon, Vance; Loughead, James; Gur, Ruben C; Javitt, Daniel C

    2009-11-01

    Both emotion and visual processing deficits are documented in schizophrenia, and preferential magnocellular visual pathway dysfunction has been reported in several studies. This study examined the contribution to emotion-processing deficits of magnocellular and parvocellular visual pathway function, based on stimulus properties and shape of contrast response functions. Experiment 1 examined the relationship between contrast sensitivity to magnocellular- and parvocellular-biased stimuli and emotion recognition using the Penn Emotion Recognition (ER-40) and Emotion Differentiation (EMODIFF) tests. Experiment 2 altered the contrast levels of the faces themselves to determine whether emotion detection curves would show a pattern characteristic of magnocellular neurons and whether patients would show a deficit in performance related to early sensory processing stages. Results for experiment 1 showed that patients had impaired emotion processing and a preferential magnocellular deficit on the contrast sensitivity task. Greater deficits in ER-40 and EMODIFF performance correlated with impaired contrast sensitivity to the magnocellular-biased condition, which remained significant for the EMODIFF task even when nonspecific correlations due to group were considered in a step-wise regression. Experiment 2 showed contrast response functions indicative of magnocellular processing for both groups, with patients showing impaired performance. Impaired emotion identification on this task was also correlated with magnocellular-biased visual sensory processing dysfunction. These results provide evidence for a contribution of impaired early-stage visual processing in emotion recognition deficits in schizophrenia and suggest that a bottom-up approach to remediation may be effective.

  12. The low-frequency encoding disadvantage: Word frequency affects processing demands.

    PubMed

    Diana, Rachel A; Reder, Lynne M

    2006-07-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative recognition, are used, the effects seem to contradict a low-frequency advantage in memory. Four experiments are presented to support the claim that in addition to the advantage of low-frequency words at retrieval, there is a low-frequency disadvantage during encoding. That is, low-frequency words require more processing resources to be encoded episodically than high-frequency words. Under encoding conditions in which processing resources are limited, low-frequency words show a larger decrement in recognition than high-frequency words. Also, studying items (pictures and words of varying frequencies) along with low-frequency words reduces performance for those stimuli. Copyright 2006 APA, all rights reserved.

  13. L2 Word Recognition: Influence of L1 Orthography on Multi-syllabic Word Recognition.

    PubMed

    Hamada, Megumi

    2017-10-01

    L2 reading research suggests that L1 orthographic experience influences L2 word recognition. Nevertheless, the findings on multi-syllabic words in English are still limited despite the fact that a vast majority of words are multi-syllabic. The study investigated whether L1 orthography influences the recognition of multi-syllabic words, focusing on the position of an embedded word. The participants were Arabic ESL learners, Chinese ESL learners, and native speakers of English. The task was a word search task, in which the participants identified a target word embedded in a pseudoword at the initial, middle, or final position. The search accuracy and speed indicated that all groups showed a strong preference for the initial position. The accuracy data further indicated group differences. The Arabic group showed higher accuracy in the final than in the middle position, whereas the Chinese group showed the opposite pattern and the native speakers showed no difference between the two positions. The findings suggest that L2 multi-syllabic word recognition involves unique processes.

  14. The Dynamic Multisensory Engram: Neural Circuitry Underlying Crossmodal Object Recognition in Rats Changes with the Nature of Object Experience.

    PubMed

    Jacklin, Derek L; Cloke, Jacob M; Potvin, Alphonse; Garrett, Inara; Winters, Boyer D

    2016-01-27

    Rats, humans, and monkeys demonstrate robust crossmodal object recognition (CMOR), identifying objects across sensory modalities. We have shown that rats' performance of a spontaneous tactile-to-visual CMOR task requires functional integration of perirhinal (PRh) and posterior parietal (PPC) cortices, which seemingly provide visual and tactile object feature processing, respectively. However, research with primates has suggested that PRh is sufficient for multisensory object representation. We tested this hypothesis in rats using a modification of the CMOR task in which multimodal preexposure to the to-be-remembered objects significantly facilitates performance. In the original CMOR task, with no preexposure, reversible lesions of PRh or PPC produced patterns of impairment consistent with modality-specific contributions. Conversely, in the CMOR task with preexposure, PPC lesions had no effect, whereas PRh involvement was robust, proving necessary for phases of the task that did not require PRh activity when rats did not have preexposure; this pattern was supported by results from c-fos imaging. We suggest that multimodal preexposure alters the circuitry responsible for object recognition, in this case obviating the need for PPC contributions and expanding PRh involvement, consistent with the polymodal nature of PRh connections and results from primates indicating a key role for PRh in multisensory object representation. These findings have significant implications for our understanding of multisensory information processing, suggesting that the nature of an individual's past experience with an object strongly determines the brain circuitry involved in representing that object's multisensory features in memory. The ability to integrate information from multiple sensory modalities is crucial to the survival of organisms living in complex environments. Appropriate responses to behaviorally relevant objects are informed by integration of multisensory object features. We used crossmodal object recognition tasks in rats to study the neurobiological basis of multisensory object representation. When rats had no prior exposure to the to-be-remembered objects, the spontaneous ability to recognize objects across sensory modalities relied on functional interaction between multiple cortical regions. However, prior multisensory exploration of the task-relevant objects remapped cortical contributions, negating the involvement of one region and significantly expanding the role of another. This finding emphasizes the dynamic nature of cortical representation of objects in relation to past experience. Copyright © 2016 the authors 0270-6474/16/361273-17$15.00/0.

  15. Biometric recognition via texture features of eye movement trajectories in a visual searching task.

    PubMed

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei; Zhang, Chenggang

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks and feature extraction and recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers' temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained from using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases.

  16. Biometric recognition via texture features of eye movement trajectories in a visual searching task

    PubMed Central

    Li, Chunyong; Xue, Jiguo; Quan, Cheng; Yue, Jingwei

    2018-01-01

    Biometric recognition technology based on eye-movement dynamics has been in development for more than ten years. Different visual tasks and feature extraction and recognition methods have been proposed to improve the performance of eye movement biometric systems. However, the correct identification and verification rates, especially in long-term experiments, as well as the effects of visual tasks and eye trackers’ temporal and spatial resolution, are still the foremost considerations in eye movement biometrics. With a focus on these issues, we proposed a new visual searching task for eye movement data collection and a new class of eye movement features for biometric recognition. To demonstrate the improvement gained from using this visual searching task in eye movement biometrics, three other eye movement feature extraction methods were also tested on our eye movement datasets. Compared with the original results, all three methods yielded better results, as expected. In addition, the biometric performance of these four feature extraction methods was compared using the equal error rate (EER) and Rank-1 identification rate (Rank-1 IR), and the texture features introduced in this paper were ultimately shown to offer advantages in long-term stability and robustness over time and spatial precision. Finally, the results of different combinations of these methods with a score-level fusion method indicated that multi-biometric methods perform better in most cases. PMID:29617383
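
    As a rough illustration of the evaluation quantities named in this record (the equal error rate, the Rank-1 identification rate, and score-level fusion), the following sketch may help; it is not the authors' code, and the functions and toy scores are assumptions for demonstration only.

    ```python
    # Minimal sketch (not the authors' code): EER, Rank-1 identification rate,
    # and a simple min-max score-level fusion over genuine/impostor similarity scores.
    import numpy as np

    def equal_error_rate(genuine, impostor):
        """Sweep a decision threshold and return the point where the false accept
        rate (impostors accepted) and false reject rate (genuine users rejected)
        are closest, averaging the two."""
        thresholds = np.sort(np.concatenate([genuine, impostor]))
        best_far, best_frr = 1.0, 0.0
        for t in thresholds:
            far = np.mean(impostor >= t)
            frr = np.mean(genuine < t)
            if abs(far - frr) < abs(best_far - best_frr):
                best_far, best_frr = far, frr
        return (best_far + best_frr) / 2.0

    def rank1_identification_rate(score_matrix, probe_labels, gallery_labels):
        """score_matrix[i, j] is the similarity of probe i to gallery identity j."""
        predicted = np.asarray(gallery_labels)[np.argmax(score_matrix, axis=1)]
        return np.mean(predicted == np.asarray(probe_labels))

    def minmax_sum_fusion(score_lists):
        """Score-level fusion: min-max normalize each matcher's scores, then sum."""
        normed = [(s - s.min()) / (s.max() - s.min() + 1e-12)
                  for s in map(np.asarray, score_lists)]
        return np.sum(normed, axis=0)

    # Toy usage with random similarity scores
    rng = np.random.default_rng(0)
    genuine = rng.normal(0.7, 0.1, 200)    # same-person comparison scores
    impostor = rng.normal(0.4, 0.1, 2000)  # different-person comparison scores
    print("EER:", equal_error_rate(genuine, impostor))
    ```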

  17. Perceiving and Remembering Events Cross-Linguistically: Evidence from Dual-Task Paradigms

    ERIC Educational Resources Information Center

    Trueswell, John C.; Papafragou, Anna

    2010-01-01

    What role does language play during attention allocation in perceiving and remembering events? We recorded adults' eye movements as they studied animated motion events for a later recognition task. We compared native speakers of two languages that use different means of expressing motion (Greek and English). In Experiment 1, eye movements revealed…

  18. Identifiable Orthographically Similar Word Primes Interfere in Visual Word Identification

    ERIC Educational Resources Information Center

    Burt, Jennifer S.

    2009-01-01

    University students participated in five experiments concerning the effects of unmasked, orthographically similar, primes on visual word recognition in the lexical decision task (LDT) and naming tasks. The modal prime-target stimulus onset asynchrony (SOA) was 350 ms. When primes were words that were orthographic neighbors of the targets, and…

  19. Age-Related Effects of Stimulus Type and Congruency on Inattentional Blindness.

    PubMed

    Liu, Han-Hui

    2018-01-01

    Background: Most of the previous inattentional blindness (IB) studies focused on the factors that contributed to the detection of unattended stimuli. Age-related changes in IB have rarely been investigated across all age groups. In the current study, by using the dual-task IB paradigm, we aimed to explore the age-related effects of attended stimulus type and of congruency between attended and unattended stimuli on IB. Methods: The current study recruited 111 participants (30 adolescents, 48 young adults, and 33 middle-aged adults) in the baseline recognition experiments and 341 participants (135 adolescents, 135 young adults, and 71 middle-aged adults) in the IB experiment. We applied the superimposed picture and word streams experimental paradigm to explore the age-related effects of attended stimulus type and congruency between attended and unattended stimuli on IB. An ANOVA was performed to analyze the results. Results: Participants across all age groups presented significantly lower recognition scores for both pictures and words in comparison with baseline recognition. Recognition of unattended pictures and words decreased from adolescents to young adults to middle-aged adults. When the pictures and words were congruent, all participants showed significantly higher recognition scores for unattended stimuli than in the incongruent condition. Adolescents and young adults did not show recognition differences whether the primary task was attending to pictures or to words. Conclusion: The current findings showed that all participants presented better recognition scores for attended stimuli in comparison with unattended stimuli, and that recognition scores decreased from adolescents to young and middle-aged adults. The findings partly supported the attention capacity models of IB.

  20. Color and context: an ERP study on intrinsic and extrinsic feature binding in episodic memory.

    PubMed

    Ecker, Ullrich K H; Zimmer, Hubert D; Groh-Bordin, Christian

    2007-09-01

    Episodic memory for intrinsic item and extrinsic context information is postulated to rely on two distinct types of representation: object and episodic tokens. These provide the basis for familiarity and recollection, respectively. Electrophysiological indices of these processes (ERP old-new effects) were used together with behavioral data to test these assumptions. We manipulated an intrinsic object feature (color; Experiment 1) and a contextual feature (background; Experiments 1 and 2). In an inclusion task (Experiment 1), the study-test manipulation of color affected object recognition performance and modulated ERP old-new effects associated with both familiarity and recollection. In contrast, a contextual manipulation had no effect, although both intrinsic and extrinsic information was available in a direct feature (source memory) test. When made task relevant (exclusion task; Experiment 2), however, context affected the ERP recollection effect, while still leaving the ERP familiarity effect uninfluenced. We conclude that intrinsic features bound in object tokens are involuntarily processed during object recognition, thus influencing familiarity, whereas context features bound in episodic tokens are voluntarily accessed, exclusively influencing recollection. Figures depicting all the electrodes analyzed are available in an online supplement at www.psychonomic.org/archive.

  1. When a Picasso is a "Picasso": the entry point in the identification of visual art.

    PubMed

    Belke, B; Leder, H; Harsanyi, G; Carbon, C C

    2010-02-01

    We investigated whether art is distinguished from other real world objects in human cognition, in that art allows for a special memorial representation and identification based on artists' specific stylistic appearances. Testing art-experienced viewers, converging empirical evidence from three experiments, which have proved sensitive to addressing the question of initial object recognition, suggests that identification of visual art is at the subordinate level of the producing artist. Specifically, in a free naming task it was found that art-objects as opposed to non-art-objects were most frequently named with subordinate level categories, with the artist's name as the most frequent category (Experiment 1). In a category-verification task (Experiment 2), art-objects were recognized faster than non-art-objects on the subordinate level with the artist's name. In a conceptual priming task, subordinate primes of artists' names facilitated matching responses to art-objects but subordinate primes did not facilitate responses to non-art-objects (Experiment 3). Collectively, these results suggest that the artist's name has a special status in the memorial representation of visual art and serves as a predominant entry point in the recognition of visual art. Copyright 2009 Elsevier B.V. All rights reserved.

  2. Effects of repeated collaborative retrieval on individual memory vary as a function of recall versus recognition tasks.

    PubMed

    Blumen, Helena M; Rajaram, Suparna

    2009-11-01

    Our research examines how prior group collaboration modulates later individual memory. We recently showed that repeated collaborative recall sessions benefit later individual recall more than a single collaborative recall session (Blumen & Rajaram, 2008). Current research compared the effects of repeated collaborative recall and repeated collaborative recognition on later individual recall and later individual recognition. A total of 192 participants studied a list of nouns and then completed three successive retrieval sessions in one of four conditions. While two collaborative recall sessions and two collaborative recognition sessions generated comparable levels of individual recall (CRecall-CRecall-IRecall ≈ CRecognition-CRecognition-IRecall, Experiment 1a), two collaborative recognition sessions generated greater levels of individual recognition than two collaborative recall sessions (CRecognition-CRecognition-IRecognition > CRecall-CRecall-IRecognition, Experiment 1b). These findings are discussed in terms of two opposing mechanisms that operate during collaborative retrieval (re-exposure and retrieval disruption) and in terms of transfer-appropriate processing across collaborative and individual retrieval sessions.

  3. A task-irrelevant stimulus attribute affects perception and short-term memory

    PubMed Central

    Huang, Jie; Kahana, Michael J.; Sekuler, Robert

    2010-01-01

    Selective attention protects cognition against intrusions of task-irrelevant stimulus attributes. This protective function was tested in coordinated psychophysical and memory experiments. Stimuli were superimposed, horizontally and vertically oriented gratings of varying spatial frequency; only one orientation was task relevant. Experiment 1 demonstrated that a task-irrelevant spatial frequency interfered with visual discrimination of the task-relevant spatial frequency. Experiment 2 adopted a two-item Sternberg task, using stimuli that had been scaled to neutralize interference at the level of vision. Despite being visually neutralized, the task-irrelevant attribute strongly influenced recognition accuracy and associated reaction times (RTs). This effect was sharply tuned, with the task-irrelevant spatial frequency having an impact only when the task-relevant spatial frequencies of the probe and study items were highly similar to one another. Model-based analyses of judgment accuracy and RT distributional properties converged on the point that the irrelevant orientation operates at an early stage in memory processing, not at a later one that supports decision making. PMID:19933454

  4. Higher-Order Neural Networks Applied to 2D and 3D Object Recognition

    NASA Technical Reports Server (NTRS)

    Spirkovska, Lilly; Reid, Max B.

    1994-01-01

    A Higher-Order Neural Network (HONN) can be designed to be invariant to geometric transformations such as scale, translation, and in-plane rotation. Invariances are built directly into the architecture of a HONN and do not need to be learned. Thus, for 2D object recognition, the network needs to be trained on just one view of each object class, not numerous scaled, translated, and rotated views. Because the 2D object recognition task is a component of the 3D object recognition task, built-in 2D invariance also decreases the size of the training set required for 3D object recognition. We present results for 2D object recognition both in simulation and within a robotic vision experiment and for 3D object recognition in simulation. We also compare our method to other approaches and show that HONNs have distinct advantages for position, scale, and rotation-invariant object recognition. The major drawback of HONNs is that the size of the input field is limited due to the memory required for the large number of interconnections in a fully connected network. We present partial connectivity strategies and a coarse-coding technique for overcoming this limitation and increasing the input field to that required by practical object recognition problems.
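
    The invariance idea summarized above can be illustrated with a toy feature related to what higher-order units effectively compute: interior angles of pixel triplets, which do not change under translation, scale, or in-plane rotation. The sketch below is only an illustration of that principle under assumed parameters, not the HONN architecture itself.

    ```python
    # Toy illustration (not the HONN implementation described above): features built
    # from the interior angles of foreground-pixel triplets are invariant to
    # translation, scale, and in-plane rotation of the shape.
    import numpy as np

    def triangle_angle_histogram(binary_image, n_bins=18, n_triplets=2000, seed=0):
        """Histogram of interior angles over randomly sampled foreground-pixel triplets."""
        ys, xs = np.nonzero(binary_image)
        points = np.stack([xs, ys], axis=1).astype(float)
        rng = np.random.default_rng(seed)
        hist = np.zeros(n_bins)
        for _ in range(n_triplets):
            i, j, k = rng.choice(len(points), size=3, replace=False)
            a, b, c = points[i], points[j], points[k]
            la = np.linalg.norm(b - c)   # side opposite vertex a, etc.
            lb = np.linalg.norm(a - c)
            lc = np.linalg.norm(a - b)
            if min(la, lb, lc) < 1e-9:
                continue
            # law of cosines for the three interior angles
            A = np.arccos(np.clip((lb**2 + lc**2 - la**2) / (2 * lb * lc), -1, 1))
            B = np.arccos(np.clip((la**2 + lc**2 - lb**2) / (2 * la * lc), -1, 1))
            C = np.pi - A - B
            for angle in (A, B, C):
                hist[min(int(angle / np.pi * n_bins), n_bins - 1)] += 1
        return hist / max(hist.sum(), 1.0)

    # The same shape translated in the image yields the same feature vector, so a
    # classifier trained on one view generalizes to shifted/rotated/scaled views.
    img = np.zeros((32, 32), dtype=int)
    img[8:12, 6:20] = 1
    shifted = np.roll(img, (5, 5), axis=(0, 1))
    print(np.allclose(triangle_angle_histogram(img), triangle_angle_histogram(shifted)))
    ```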

  5. Gait Recognition Based on Convolutional Neural Networks

    NASA Astrophysics Data System (ADS)

    Sokolova, A.; Konushin, A.

    2017-05-01

    In this work we investigate the problem of people recognition by their gait. For this task, we implement deep learning approach using the optical flow as the main source of motion information and combine neural feature extraction with the additional embedding of descriptors for representation improvement. In order to find the best heuristics, we compare several deep neural network architectures, learning and classification strategies. The experiments were made on two popular datasets for gait recognition, so we investigate their advantages and disadvantages and the transferability of considered methods.
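
    As a minimal sketch of the general pipeline described above (optical flow computed between frames and fed to a convolutional network for identity classification), the following stand-in may be useful; the frame sizes, layer sizes, and number of subjects are illustrative assumptions, not the architectures compared in the paper.

    ```python
    # Minimal optical-flow-plus-CNN sketch (a stand-in, not the authors' networks).
    # Assumes OpenCV and PyTorch are installed; all sizes are illustrative.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn

    def flow_stack(frames):
        """Dense Farneback optical flow between consecutive grayscale frames,
        stacked as a (2*(T-1), H, W) tensor of x/y flow components."""
        flows = []
        for prev, nxt in zip(frames[:-1], frames[1:]):
            f = cv2.calcOpticalFlowFarneback(prev, nxt, None,
                                             0.5, 3, 15, 3, 5, 1.2, 0)
            flows.extend([f[..., 0], f[..., 1]])
        return torch.from_numpy(np.stack(flows)).float()

    class GaitCNN(nn.Module):
        """Small stand-in network: convolutional features over stacked flow fields,
        then a linear classifier over subject identities."""
        def __init__(self, in_channels, n_subjects):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(64, n_subjects)

        def forward(self, x):
            descriptor = self.features(x).flatten(1)  # per-sequence descriptor
            return self.classifier(descriptor)        # identity logits

    # Usage sketch: 10 random 128x128 "frames" -> 18 flow channels -> logits for 50 subjects
    rng = np.random.default_rng(0)
    frames = [rng.integers(0, 255, size=(128, 128), dtype=np.uint8) for _ in range(10)]
    x = flow_stack(frames).unsqueeze(0)               # shape (1, 18, 128, 128)
    logits = GaitCNN(in_channels=x.shape[1], n_subjects=50)(x)
    ```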

  6. Effects of radiofrequency radiation emitted by cellular telephones on the cognitive functions of humans.

    PubMed

    Eliyahu, Ilan; Luria, Roy; Hareuveny, Ronen; Margaliot, Menachem; Meiran, Nachshon; Shani, Gad

    2006-02-01

    The present study examined the effects of exposure to electromagnetic radiation emitted by a standard GSM phone at 890 MHz on human cognitive functions. This study attempted to establish a connection between the exposure of a specific area of the brain and the cognitive functions associated with that area. A total of 36 healthy right-handed male subjects performed four distinct cognitive tasks: spatial item recognition, verbal item recognition, and two spatial compatibility tasks. Tasks were chosen according to the brain side they are assumed to activate. All subjects performed the tasks under three exposure conditions: right side, left side, and sham exposure. The phones were controlled by a base station simulator and operated at their full power. We recorded the reaction times (RTs) and accuracy of the responses. The experiments consisted of two sections, of 1 h each, with a 5 min break in between. The tasks and the exposure regimes were counterbalanced. The results indicated that exposure of the left side of the brain slowed down left-hand response times in the second, later part of the experiment. This effect was apparent in three of the four tasks, and was highly significant in only one of the tests. The exposure intensity and its duration exceeded the common exposure of cellular phone users.

  7. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence

    PubMed Central

    Chua, Elizabeth F.; Hannula, Deborah E.; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one’s memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors which contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence. PMID:22171810

  8. Distinguishing highly confident accurate and inaccurate memory: insights about relevant and irrelevant influences on memory confidence.

    PubMed

    Chua, Elizabeth F; Hannula, Deborah E; Ranganath, Charan

    2012-01-01

    It is generally believed that accuracy and confidence in one's memory are related, but there are many instances when they diverge. Accordingly, it is important to disentangle the factors that contribute to memory accuracy and confidence, especially those factors that contribute to confidence, but not accuracy. We used eye movements to separately measure the effects of fluent cue processing, the target recognition experience, and relative evidence assessment on recognition confidence and accuracy. Eye movements were monitored during a face-scene associative recognition task, in which participants first saw a scene cue, followed by a forced-choice recognition test for the associated face, with confidence ratings. Eye movement indices of the target recognition experience were largely indicative of accuracy, and showed a relationship to confidence for accurate decisions. In contrast, eye movements during the scene cue raised the possibility that more fluent cue processing was related to higher confidence for both accurate and inaccurate recognition decisions. In a second experiment, we manipulated cue familiarity, and therefore cue fluency. Participants showed higher confidence for cue-target associations when the cue was more familiar, especially for incorrect responses. These results suggest that over-reliance on cue familiarity and under-reliance on the target recognition experience may lead to erroneous confidence.

  9. Sensory experience ratings (SERs) for 1,659 French words: Relationships with other psycholinguistic variables and visual word recognition.

    PubMed

    Bonin, Patrick; Méot, Alain; Ferrand, Ludovic; Bugaïska, Aurélia

    2015-09-01

    We collected sensory experience ratings (SERs) for 1,659 French words in adults. Sensory experience for words is a recently introduced variable that corresponds to the degree to which words elicit sensory and perceptual experiences (Juhasz & Yap Behavior Research Methods, 45, 160-168, 2013; Juhasz, Yap, Dicke, Taylor, & Gullick Quarterly Journal of Experimental Psychology, 64, 1683-1691, 2011). The relationships of the sensory experience norms with other psycholinguistic variables (e.g., imageability and age of acquisition) were analyzed. We also investigated the degree to which SER predicted performance in visual word recognition tasks (lexical decision, word naming, and progressive demasking). The analyses indicated that SER reliably predicted response times in lexical decision, but not in word naming or progressive demasking. The findings are discussed in relation to the status of SER, the role of semantic code activation in visual word recognition, and the embodied view of cognition.

  10. 29 CFR 1960.56 - Training of safety and health specialists.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., laboratory experiences, field study, and other formal learning experiences to prepare them to perform the... program development and implementation, as well as hazard recognition, evaluation and control, equipment... tasks. (b) Each agency shall implement career development programs for their occupational safety and...

  11. Conflicting but close: Readers' integration of information sources as a function of their disagreement.

    PubMed

    Saux, Gaston; Britt, Anne; Le Bigot, Ludovic; Vibert, Nicolas; Burin, Debora; Rouet, Jean-François

    2017-01-01

    According to the documents model framework (Britt, Perfetti, Sandak, & Rouet, 1999), readers' detection of contradictions within texts increases their integration of source-content links (i.e., who says what). This study examines whether conflict may also strengthen the relationship between the respective sources. In two experiments, participants read brief news reports containing two critical statements attributed to different sources. In half of the reports, the statements were consistent with each other, whereas in the other half they were discrepant. Participants were tested for source memory and source integration in an immediate item-recognition task (Experiment 1) and a cued recall task (Experiments 1 and 2). In both experiments, discrepancies increased readers' memory for sources. We found that discrepant sources enhanced retrieval of the other source compared to consistent sources (using a delayed recall measure; Experiments 1 and 2). However, discrepant sources failed to prime the other source as evidenced in an online recognition measure (Experiment 1). We argue that discrepancies promoted the construction of links between sources, but that integration did not take place during reading.

  12. Deep--deeper--deepest? Encoding strategies and the recognition of human faces.

    PubMed

    Sporer, S L

    1991-03-01

    Various encoding strategies that supposedly promote deeper processing of human faces (e.g., character judgments) have led to better recognition than more shallow processing tasks (judging the width of the nose). However, does deeper processing actually lead to an improvement in recognition, or, conversely, does shallow processing lead to a deterioration in performance when compared with naturally employed encoding strategies? Three experiments systematically compared a total of 8 different encoding strategies manipulating depth of processing, amount of elaboration, and self-generation of judgmental categories. All strategies that required a scanning of the whole face were basically equivalent but no better than natural strategy controls. The consistently worst groups were the ones that rated faces along preselected physical dimensions. This can be explained by subjects' lesser task involvement as revealed by manipulation checks.

  13. Address entry while driving: speech recognition versus a touch-screen keyboard.

    PubMed

    Tsimhoni, Omer; Smith, Daniel; Green, Paul

    2004-01-01

    A driving simulator experiment was conducted to determine the effects of entering addresses into a navigation system during driving. Participants drove on roads of varying visual demand while entering addresses. Three address entry methods were explored: word-based speech recognition, character-based speech recognition, and typing on a touch-screen keyboard. For each method, vehicle control and task measures, glance timing, and subjective ratings were examined. During driving, word-based speech recognition yielded the shortest total task time (15.3 s), followed by character-based speech recognition (41.0 s) and touch-screen keyboard (86.0 s). The standard deviation of lateral position when performing keyboard entry (0.21 m) was 60% higher than that for all other address entry methods (0.13 m). Degradation of vehicle control associated with address entry using a touch screen suggests that the use of speech recognition is favorable. Speech recognition systems with visual feedback, however, even with excellent accuracy, are not without performance consequences. Applications of this research include the design of in-vehicle navigation systems as well as other systems requiring significant driver input, such as E-mail, the Internet, and text messaging.

  14. The subjective experience of object recognition: comparing metacognition for object detection and object categorization.

    PubMed

    Meuwese, Julia D I; van Loon, Anouk M; Lamme, Victor A F; Fahrenfort, Johannes J

    2014-05-01

    Perceptual decisions seem to be made automatically and almost instantly. Constructing a unitary subjective conscious experience takes more time. For example, when trying to avoid a collision with a car on a foggy road you brake or steer away in a reflex, before realizing you were in a near accident. This subjective aspect of object recognition has been given little attention. We used metacognition (assessed with confidence ratings) to measure subjective experience during object detection and object categorization for degraded and masked objects, while objective performance was matched. Metacognition was equal for degraded and masked objects, but categorization led to higher metacognition than did detection. This effect turned out to be driven by a difference in metacognition for correct rejection trials, which seemed to be caused by an asymmetry of the distractor stimulus: It does not contain object-related information in the detection task, whereas it does contain such information in the categorization task. Strikingly, this asymmetry selectively impacted metacognitive ability when objective performance was matched. This finding reveals a fundamental difference in how humans reflect versus act on information: When matching the amount of information required to perform two tasks at some objective level of accuracy (acting), metacognitive ability (reflecting) is still better in tasks that rely on positive evidence (categorization) than in tasks that rely more strongly on an absence of evidence (detection).

  15. Exogenous temporal cues enhance recognition memory in an object-based manner.

    PubMed

    Ohyama, Junji; Watanabe, Katsumi

    2010-11-01

    Exogenous attention enhances the perception of attended items in both a space-based and an object-based manner. Exogenous attention also improves recognition memory for attended items in the space-based mode. However, it has not been examined whether object-based exogenous attention enhances recognition memory. To address this issue, we examined whether a sudden visual change in a task-irrelevant stimulus (an exogenous cue) would affect participants' recognition memory for items that were serially presented around a cued time. The results showed that recognition accuracy for an item was strongly enhanced when the visual cue occurred at the same location and time as the item (Experiments 1 and 2). The memory enhancement effect occurred when the exogenous visual cue and an item belonged to the same object (Experiments 3 and 4) and even when the cue was counterpredictive of the timing of an item to be asked about (Experiment 5). The present study suggests that an exogenous temporal cue automatically enhances the recognition accuracy for an item that is presented at close temporal proximity to the cue and that recognition memory enhancement occurs in an object-based manner.

  16. Impact of body posture on laterality judgement and explicit recognition tasks performed on self and others' hands.

    PubMed

    Conson, Massimiliano; Errico, Domenico; Mazzarella, Elisabetta; De Bellis, Francesco; Grossi, Dario; Trojano, Luigi

    2015-04-01

    Judgments on laterality of hand stimuli are faster and more accurate when dealing with one's own than others' hand, i.e. the self-advantage. This advantage seems to be related to activation of a sensorimotor mechanism while implicitly processing one's own hands, but not during explicit recognition of one's own hand. Here, we specifically tested the influence of proprioceptive information on the self-hand advantage by manipulating participants' body posture during self and others' hand processing. In Experiment 1, right-handed healthy participants judged laterality of either self or others' hands, whereas in Experiment 2, an explicit recognition of one's own hands was required. In both experiments, the participants performed the task while holding their left or right arm flexed with their hand in direct contact with their chest ("flexed self-touch posture") or with their hand placed on a smooth wooden surface in correspondence with their chest ("flexed proprioceptive-only posture"). In an "extended control posture", both arms were extended and in contact with thighs. In Experiment 1 (hand laterality judgment), we confirmed the self-advantage and demonstrated that it was enhanced when the subjects judged left-hand stimuli at 270° orientation while keeping their left arm in the flexed proprioceptive-only posture. In Experiment 2 (explicit self-hand recognition), instead, we found an advantage for others' hand ("self-disadvantage") independently of the posture manipulation. Thus, position-related proprioceptive information from the left, non-dominant arm can enhance the sensorimotor representation of one's own body, selectively favouring implicit processing of one's own hands.

  17. On the relation between feeling of knowing and lexical decision: persistent subthreshold activation or topic familiarity?

    PubMed

    Connor, L T; Balota, D A; Neely, J H

    1992-05-01

    Experiment 1 replicated Yaniv and Meyer's (1987) finding that lexical decision and episodic recognition performance was better for words previously yielding high-accessibility levels (a combination of feeling-of-knowing and tip-of-the-tongue ratings) in comparison with those yielding low-accessibility levels in a rare word definition task. Experiment 2 yielded the same pattern even though lexical decisions preceded accessibility estimates by a full week. Experiment 3 dismissed the possibility that the Experiment 2 results may have been due to a long-term influence from the lexical decision task to the rare word judgment task. These results support a model in which Ss (a) retrieve topic familiarity information in making accessibility estimates in the rare word definition task and (b) use this information to modulate lexical decision performance.

  18. Effects of visual and verbal interference tasks on olfactory memory: the role of task complexity.

    PubMed

    Annett, J M; Leslie, J C

    1996-08-01

    Recent studies have demonstrated that visual and verbal suppression tasks interfere with olfactory memory in a manner which is partially consistent with a dual coding interpretation. However, it has been suggested that total task complexity rather than modality specificity of the suppression tasks might account for the observed pattern of results. This study addressed the issue of whether or not the level of difficulty and complexity of suppression tasks could explain the apparent modality effects noted in earlier experiments. A total of 608 participants were each allocated to one of 19 experimental conditions involving interference tasks which varied suppression type (visual or verbal), nature of complexity (single, double or mixed) and level of difficulty (easy, optimal or difficult) and presented with 13 target odours. Either recognition of the odours or free recall of the odour names was tested on one occasion, either within 15 minutes of presentation or one week later. Both recognition and recall performance showed an overall effect for suppression nature, suppression level and time of testing with no effect for suppression type. The results lend only limited support to Paivio's (1986) dual coding theory, but have a number of characteristics which suggest that an adequate account of olfactory memory may be broadly similar to current theories of face and object recognition. All of these phenomena might be dealt with by an appropriately modified version of dual coding theory.

  19. Activation of G-protein-coupled receptor 30 is sufficient to enhance spatial recognition memory in ovariectomized rats.

    PubMed

    Hawley, Wayne R; Grissom, Elin M; Moody, Nicole M; Dohanich, Gary P; Vasudevan, Nandini

    2014-04-01

    In ovariectomized rats, administration of estradiol, or selective estrogen receptor agonists that activate either the α or β isoforms, have been shown to enhance spatial cognition on a variety of learning and memory tasks, including those that capitalize on the preference of rats to seek out novelty. Although the effects of the putative estrogen G-protein-coupled receptor 30 (GPR30) on hippocampus-based tasks have been reported using food-motivated tasks, the effects of activation of GPR30 receptors on tasks that depend on the preference of rats to seek out spatial novelty remain to be determined. Therefore, the aim of the current study was to determine if short-term treatment of ovariectomized rats with G-1, an agonist for GPR30, would mimic the effects on spatial recognition memory observed following short-term estradiol treatment. In Experiment 1, ovariectomized rats treated with a low dose (1 μg) of estradiol 48 h and 24 h prior to the information trial of a Y-maze task exhibited a preference for the arm associated with the novel environment on the retention trial conducted 48 h later. In Experiment 2, treatment of ovariectomized rats with G-1 (25 μg) 48 h and 24 h prior to the information trial of a Y-maze task resulted in a greater preference for the arm associated with the novel environment on the retention trial. Collectively, the results indicated that short-term treatment of ovariectomized rats with a GPR30 agonist was sufficient to enhance spatial recognition memory, an effect that also occurred following short-term treatment with a low dose of estradiol. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Perceiving patterns of play in dynamic sport tasks: investigating the essential information underlying skilled performance.

    PubMed

    Williams, A Mark; Hodges, Nicola J; North, Jamie S; Barton, Gabor

    2006-01-01

    The perceptual-cognitive information used to support pattern-recognition skill in soccer was examined. In experiment 1, skilled players were quicker and more accurate than less-skilled players at recognising familiar and unfamiliar soccer action sequences presented on film. In experiment 2, these action sequences were converted into point-light displays, with superficial display features removed and the positions of players and the relational information between them made more salient. Skilled players were more accurate than less-skilled players in recognising sequences presented in point-light form, implying that each pattern of play can be defined by the unique relations between players. In experiment 3, various offensive and defensive players were occluded for the duration of each trial in an attempt to identify the most important sources of information underpinning successful performance. A decrease in response accuracy was observed under occluded compared with non-occluded conditions and the expertise effect was no longer observed. The relational information between certain key players, team-mates and their defensive counterparts may provide the essential information for effective pattern-recognition skill in soccer. Structural feature analysis, temporal phase relations, and knowledge-based information are effectively integrated to facilitate pattern recognition in dynamic sport tasks.

  1. Rapid extraction of gist from visual text and its influence on word recognition.

    PubMed

    Asano, Michiko; Yokosawa, Kazuhiko

    2011-01-01

    Two experiments explored rapid extraction of gist from a visual text and its influence on word recognition. In both, a short text (sentence) containing a target word was presented for 200 ms and was followed by a target recognition task. Results showed that participants recognized contextually anomalous word targets less frequently than contextually consistent counterparts (Experiment 1). This context effect was obtained when sentences contained the same semantic content but with disrupted syntactic structure (Experiment 2). Results demonstrate that words in a briefly presented visual sentence are processed in parallel and that rapid extraction of sentence gist relies on a primitive representation of sentence context (termed protocontext) that is semantically activated by the simultaneous presentation of multiple words (i.e., a sentence) before syntactic processing.

  2. An Investigation of Differential Encoding and Retrieval in Older Adult College Students.

    ERIC Educational Resources Information Center

    Shaughnessy, Michael F.; Reif, Laurie

    Three experiments were conducted in order to clarify the encoding/retrieval dilemma in older adult students; and the recognition/recall test issue was also explored. First, a mnemonic technique based on the "key word" method of Funk and Tarshis was used; secondly, a semantic processing task was tried; and lastly, a repetition task, based…

  3. Measuring Search Efficiency in Complex Visual Search Tasks: Global and Local Clutter

    ERIC Educational Resources Information Center

    Beck, Melissa R.; Lohrenz, Maura C.; Trafton, J. Gregory

    2010-01-01

    Set size and crowding affect search efficiency by limiting attention for recognition and attention against competition; however, these factors can be difficult to quantify in complex search tasks. The current experiments use a quantitative measure of the amount and variability of visual information (i.e., clutter) in highly complex stimuli (i.e.,…

  4. Happy faces, sad faces: Emotion understanding in toddlers and preschoolers with language impairments.

    PubMed

    Rieffe, Carolien; Wiefferink, Carin H

    2017-03-01

    The capacity for emotion recognition and understanding is crucial for daily social functioning. We examined to what extent this capacity is impaired in young children with a Language Impairment (LI). In typical development, children learn to recognize emotions in faces and situations through social experiences and social learning. Children with LI have less access to these experiences and are therefore expected to fall behind their peers without LI. In this study, 89 preschool children with LI and 202 children without LI (mean age 3 years and 10 months in both groups) were tested on three indices for facial emotion recognition (discrimination, identification, and attribution in emotion evoking situations). Parents reported on their children's emotion vocabulary and ability to talk about their own emotions. Preschoolers with and without LI performed similarly on the non-verbal task for emotion discrimination. Children with LI fell behind their peers without LI on both other tasks for emotion recognition that involved labelling the four basic emotions (happy, sad, angry, fear). The outcomes of these two tasks were also related to children's level of emotion language. These outcomes emphasize the importance of 'emotion talk' at the youngest age possible for children with LI. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Is talking to an automated teller machine natural and fun?

    PubMed

    Chan, F Y; Khalid, H M

    Usability and affective issues of using automatic speech recognition technology to interact with an automated teller machine (ATM) are investigated in two experiments. The first uncovered dialogue patterns of ATM users for the purpose of designing the user interface for a simulated speech ATM system. Applying the Wizard-of-Oz methodology and multiple mapping and word spotting techniques, the speech-driven ATM accommodates bilingual users of Bahasa Melayu and English. The second experiment evaluates the usability of a hybrid speech ATM, comparing it with a simulated manual ATM. The aim is to investigate how natural and fun talking to a speech ATM can be for these first-time users. Subjects performed the withdrawal and balance enquiry tasks. An ANOVA was performed on the usability and affective data. The results showed significant differences between systems in the ability to complete the tasks as well as in transaction errors. Performance was measured on the time taken by subjects to complete the task and the number of speech recognition errors that occurred. On the basis of user emotions, it can be said that the hybrid speech system enabled pleasurable interaction. Despite the limitations of speech recognition technology, users are set to talk to the ATM when it becomes available for public use.

  6. Slow potentials in a melody recognition task.

    PubMed

    Verleger, R; Schellberg, D

    1990-01-01

    In a previous study, slow negative shifts were found in the EEG of subjects listening to well-known melodies. The two experiments reported here were designed to investigate the variables to which these slow potentials are related. In the first experiment, two opposite hypotheses were tested: The slow shifts might express subjects' acquaintance with the melodies or, on the contrary, the effort invested to identify them. To this end, some of the melodies were presented in the rhythms of other melodies to make recognition more difficult. Further, melodies rated as very well-known and as very unknown were analysed separately. However, the slow shifts were not affected by these experimental variations. Therefore, in the second experiment, on the one hand the purely physical parameters intensity and duration were varied, but this variation had no impact on the slow shifts either. On the other hand, recognition was made more difficult by monotonously repeating the pitch of the 4th tone for the rest of some melodies. The slow negative shifts were enhanced with these monotonous melodies. This enhancement supports the "effort" hypothesis. Accordingly, the shifts obtained in both experiments might likewise reflect effort. But since the task was not demanding, it is suggested that these constant shifts reflect the effort invested for coping with the entire underarousing situation rather than with the task. Frequently, slow eye movements occurred in the same time range as the slow potentials, resulting in EOG potentials spreading to the EEG recording sites. Yet results did not change substantially when the EEG recordings were corrected for the influence of EOG potentials.

  7. Toward open set recognition.

    PubMed

    Scheirer, Walter J; de Rezende Rocha, Anderson; Sapkota, Archana; Boult, Terrance E

    2013-07-01

    To date, almost all experimental evaluations of machine learning-based recognition algorithms in computer vision have taken the form of "closed set" recognition, whereby all testing classes are known at training time. A more realistic scenario for vision applications is "open set" recognition, where incomplete knowledge of the world is present at training time, and unknown classes can be submitted to an algorithm during testing. This paper explores the nature of open set recognition and formalizes its definition as a constrained minimization problem. The open set recognition problem is not well addressed by existing algorithms because it requires strong generalization. As a step toward a solution, we introduce a novel "1-vs-set machine," which sculpts a decision space from the marginal distances of a 1-class or binary SVM with a linear kernel. This methodology applies to several different applications in computer vision where open set recognition is a challenging problem, including object recognition and face verification. We consider both in this work, with large scale cross-dataset experiments performed over the Caltech 256 and ImageNet sets, as well as face matching experiments performed over the Labeled Faces in the Wild set. The experiments highlight the effectiveness of machines adapted for open set evaluation compared to existing 1-class and binary SVMs for the same tasks.
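
    The following sketch illustrates only the generic open-set setup described above (known classes at training time, possibly unknown classes at test time, with rejection below a score threshold); it is not the 1-vs-set machine, and the data and threshold are toy assumptions. Its output also hints at why naive thresholding of linear SVM scores handles open space poorly, which motivates the paper's formulation.

    ```python
    # Generic open-set sketch (not the "1-vs-set machine"): one linear SVM per known
    # class, with test samples labeled "unknown" when no class clears a threshold.
    import numpy as np
    from sklearn.svm import LinearSVC

    def fit_per_class_svms(X_train, y_train):
        """One linear SVM per known class (class c vs. everything else seen in training)."""
        return {c: LinearSVC(C=1.0, max_iter=10000).fit(X_train, (y_train == c).astype(int))
                for c in np.unique(y_train)}

    def predict_open_set(models, X_test, reject_threshold=0.0):
        """Assign the best-scoring known class, or 'unknown' if no score clears the threshold."""
        classes = sorted(models)
        scores = np.column_stack([models[c].decision_function(X_test) for c in classes])
        labels = np.array([classes[i] for i in scores.argmax(axis=1)], dtype=object)
        labels[scores.max(axis=1) < reject_threshold] = "unknown"
        return labels

    # Toy usage: two known classes at training time, one unseen class at test time.
    # With plain linear SVM scores, many unseen-class samples still clear the
    # threshold; this over-generalization into open space is the problem that
    # stronger open-set machinery is designed to address.
    rng = np.random.default_rng(0)
    X_known = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
    y_known = np.array([0] * 50 + [1] * 50)
    X_unseen = rng.normal([-6, 6], 1.0, (20, 2))   # samples from a class never trained on
    models = fit_per_class_svms(X_known, y_known)
    print(predict_open_set(models, X_unseen, reject_threshold=0.5))
    ```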

  8. Multitasking During Degraded Speech Recognition in School-Age Children

    PubMed Central

    Ward, Kristina M.; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children’s multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children’s accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children’s dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children’s proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition. PMID:28105890

  9. Multitasking During Degraded Speech Recognition in School-Age Children.

    PubMed

    Grieco-Calub, Tina M; Ward, Kristina M; Brehm, Laurel

    2017-01-01

    Multitasking requires individuals to allocate their cognitive resources across different tasks. The purpose of the current study was to assess school-age children's multitasking abilities during degraded speech recognition. Children (8 to 12 years old) completed a dual-task paradigm including a sentence recognition (primary) task containing speech that was either unprocessed or noise-band vocoded with 8, 6, or 4 spectral channels and a visual monitoring (secondary) task. Children's accuracy and reaction time on the visual monitoring task was quantified during the dual-task paradigm in each condition of the primary task and compared with single-task performance. Children experienced dual-task costs in the 6- and 4-channel conditions of the primary speech recognition task with decreased accuracy on the visual monitoring task relative to baseline performance. In all conditions, children's dual-task performance on the visual monitoring task was strongly predicted by their single-task (baseline) performance on the task. Results suggest that children's proficiency with the secondary task contributes to the magnitude of dual-task costs while multitasking during degraded speech recognition.
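
    The noise-band vocoding manipulation named in this record (speech divided into a small number of spectral channels whose envelopes modulate band-limited noise) can be sketched as follows; the filter design, channel edges, and envelope extraction are illustrative assumptions, not the stimulus parameters used in the study.

    ```python
    # Minimal noise-band vocoder sketch (illustrative parameters, not the study's
    # stimuli): split the signal into N spectral channels, extract each channel's
    # amplitude envelope, and use it to modulate band-limited noise.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, seed=0):
        rng = np.random.default_rng(seed)
        edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced channel edges
        out = np.zeros_like(signal, dtype=float)
        for lo, hi in zip(edges[:-1], edges[1:]):
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfiltfilt(sos, signal)
            envelope = np.abs(hilbert(band))               # channel amplitude envelope
            carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
            out += envelope * carrier
        return out / (np.max(np.abs(out)) + 1e-12)         # normalize to avoid clipping

    # Usage sketch: vocode one second of a synthetic amplitude-modulated tone at 16 kHz
    fs = 16000
    t = np.arange(fs) / fs
    speechlike = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 4 * t))
    vocoded_8ch = noise_vocode(speechlike, fs, n_channels=8)
    ```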

  10. Predicting reasoning from memory.

    PubMed

    Heit, Evan; Hayes, Brett K

    2011-02-01

    In an effort to assess the relations between reasoning and memory, in 8 experiments, the authors examined how well responses on an inductive reasoning task are predicted from responses on a recognition memory task for the same picture stimuli. Across several experimental manipulations, such as varying study time, presentation frequency, and the presence of stimuli from other categories, there was a high correlation between reasoning and memory responses (average r = .87), and these manipulations showed similar effects on the 2 tasks. The results point to common mechanisms underlying inductive reasoning and recognition memory abilities. A mathematical model, GEN-EX (generalization from examples), derived from exemplar models of categorization, is presented, which predicts both reasoning and memory responses from pairwise similarities among the stimuli, allowing for additional influences of subtyping and deterministic responding. (c) 2010 APA, all rights reserved.
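
    As a hedged illustration of the exemplar-similarity idea attributed to GEN-EX (a probe's summed similarity to studied items drives both recognition and inductive generalization), a minimal sketch follows; the similarity function, sensitivity parameter, and response rule are assumptions, not the published model equations.

    ```python
    # Illustrative exemplar-similarity sketch in the spirit of the model described
    # above; the exponential similarity, sensitivity, and response rule are assumptions.
    import numpy as np

    def summed_similarity(probe, studied_items, sensitivity=2.0):
        """Exponential-decay similarity of a probe to every studied exemplar, summed."""
        distances = np.linalg.norm(np.asarray(studied_items) - np.asarray(probe), axis=1)
        return np.sum(np.exp(-sensitivity * distances))

    def response_probability(probe, studied_items, criterion=1.0, sensitivity=2.0):
        """Map summed similarity onto a response probability; the same quantity can
        drive both an old/new recognition judgment and an inductive generalization."""
        s = summed_similarity(probe, studied_items, sensitivity)
        return s / (s + criterion)

    studied = [[0.2, 0.1], [0.3, 0.2], [0.9, 0.8]]        # studied exemplars (feature vectors)
    print(response_probability([0.25, 0.15], studied))    # near studied items -> high
    print(response_probability([2.0, 2.0], studied))      # far from studied items -> low
    ```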

  11. Control of the Contents of Working Memory--A Comparison of Two Paradigms and Two Age Groups

    ERIC Educational Resources Information Center

    Oberauer, Klaus

    2005-01-01

    Two experiments investigated whether young and old adults can temporarily remove information from a capacity-limited central component of working memory (WM) into another component, the activated part of long-term memory (LTM). Experiment 1 used a modified Sternberg recognition task (S. Sternberg, 1969); Experiment 2 used an arithmetic…

  12. Dynamic facial expression recognition based on geometric and texture features

    NASA Astrophysics Data System (ADS)

    Li, Ming; Wang, Zengfu

    2018-04-01

    Recently, dynamic facial expression recognition in videos has attracted growing attention. In this paper, we propose a novel dynamic facial expression recognition method using geometric and texture features. In our system, the facial landmark movements and texture variations upon pairwise images are used to perform the dynamic facial expression recognition tasks. For one facial expression sequence, pairwise images are created between the first frame and each of its subsequent frames. Integration of both geometric and texture features further enhances the representation of the facial expressions. Finally, a Support Vector Machine is used for facial expression recognition. Experiments conducted on the extended Cohn-Kanade database show that our proposed method achieves performance competitive with other methods.
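
    A minimal sketch of the pairwise-frame feature idea described above (landmark displacements between the first and a later frame as geometric features, a texture-difference descriptor as appearance features, concatenated and classified with an SVM) is given below; the landmark input and the simple histogram descriptor are placeholders, not the authors' implementation.

    ```python
    # Minimal pairwise-frame feature sketch: landmark displacements (geometric) plus
    # a crude texture-difference histogram (appearance), classified with an SVM.
    # Landmark detection is assumed to happen elsewhere; the histogram is a placeholder
    # for richer texture descriptors.
    import numpy as np
    from sklearn.svm import SVC

    def pairwise_features(landmarks_first, landmarks_later, frame_first, frame_later, n_bins=32):
        geometric = (landmarks_later - landmarks_first).ravel()          # landmark movement
        diff = np.abs(frame_later.astype(float) - frame_first.astype(float))
        texture, _ = np.histogram(diff, bins=n_bins, range=(0, 255), density=True)
        return np.concatenate([geometric, texture])

    # Toy usage: random stand-ins for 68 landmarks and 64x64 grayscale frames
    rng = np.random.default_rng(0)
    def toy_sample():
        lm0, lm1 = rng.normal(size=(68, 2)), rng.normal(size=(68, 2))
        fr0 = rng.integers(0, 255, (64, 64))
        fr1 = rng.integers(0, 255, (64, 64))
        return pairwise_features(lm0, lm1, fr0, fr1)

    X = np.stack([toy_sample() for _ in range(40)])
    y = rng.integers(0, 6, 40)                       # six expression classes
    clf = SVC(kernel="rbf").fit(X, y)
    print(clf.predict(X[:5]))
    ```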

  13. Learning and Consolidation of Novel Spoken Words

    ERIC Educational Resources Information Center

    Davis, Matthew H.; Di Betta, Anna Maria; Macdonald, Mark J. E.; Gaskell, Gareth

    2009-01-01

    Two experiments explored the neural mechanisms underlying the learning and consolidation of novel spoken words. In Experiment 1, participants learned two sets of novel words on successive days. A subsequent recognition test revealed high levels of familiarity for both sets. However, a lexical decision task showed that only novel words learned on…

  14. Children's Metacognitive Judgments in an Eyewitness Identification Task

    ERIC Educational Resources Information Center

    Keast, Amber; Brewer, Neil; Wells, Gary L.

    2007-01-01

    Two experiments examined children's metacognitive monitoring of recognition judgments within an eyewitness identification paradigm. A confidence-accuracy (CA) calibration approach was used to examine patterns of calibration, over-/underconfidence, and resolution. In Experiment 1, children (n=619, mean age=11 years 10 months) and adults (n=600)…

  15. Navon letters affect face learning and face retrieval.

    PubMed

    Lewis, Michael B; Mills, Claire; Hills, Peter J; Weston, Nicola

    2009-01-01

    Identifying the local letters of a Navon letter (a large letter made up of smaller, different letters) prior to recognition causes impairment in accuracy, while identifying the global letters of a Navon letter causes an enhancement in recognition accuracy (Macrae & Lewis, 2002). This effect may result from a transfer-inappropriate processing shift (TIPS) (Schooler, 2002). The present experiment extends research on the underlying mechanism of this effect by exploring this Navon effect on face learning as well as face recognition. The results of the two experiments revealed that when the Navon task used at retrieval was the same as that used at encoding, recognition accuracy was enhanced, whereas when the processing operations mismatched at retrieval and encoding, recognition accuracy was impaired. These results provide support for the TIPS explanation of the Navon effect.

  16. Partially converted stereoscopic images and the effects on visual attention and memory

    NASA Astrophysics Data System (ADS)

    Kim, Sanghyun; Morikawa, Hiroyuki; Mitsuya, Reiko; Kawai, Takashi; Watanabe, Katsumi

    2015-03-01

    This study comprised two experiments examining cognitive activities such as visual attention and memory while viewing stereoscopic (3D) images. Partially converted 3D images were used, with binocular parallax added to a specific region of the image. In Experiment 1, a change blindness task was used as the presented stimulus. Visual attention and the impact on memory were investigated by measuring the response time to accomplish the given task. In the change blindness task, an 80 ms blank was interposed between the original and altered images, and the two images were presented alternately for 240 ms each. Subjects were asked to temporarily memorize the two alternating images and to compare them, visually recognizing the difference between the two. Stimuli for four conditions (2D, 3D, partially converted 3D, and distracted partially converted 3D) were displayed in random order to 20 subjects. The results of Experiment 1 showed that partially converted 3D images tend to attract visual attention and are prone to remain in the viewer's memory in the area where moderate negative parallax has been added. To examine the impact of dynamic binocular disparity in partially converted 3D images, an evaluation experiment was conducted that applied learning, distraction, and recognition tasks to 33 subjects. The learning task involved memorizing the locations of cells, marked in two different colors, in a 5 × 5 matrix pattern. Two cells were positioned with alternating colors, and one of the gray cells was moved up, down, left, or right by one cell width. The experimental conditions were a partially converted 3D condition, in which a gray cell moved diagonally for a certain period of time with dynamic binocular disparity added; a 3D condition, in which binocular disparity was added to all gray cells; and a 2D condition. The correct response rates on the recognition task after the distraction task were compared across conditions. The results of Experiment 2 showed that the correct response rate in the partially converted 3D condition was significantly higher on the recognition task than in the other conditions. These results indicate that partially converted 3D images tend to attract visual attention and affect the viewer's memory.

  17. Blur Detection is Unaffected by Cognitive Load.

    PubMed

    Loschky, Lester C; Ringer, Ryan V; Johnson, Aaron P; Larson, Adam M; Neider, Mark; Kramer, Arthur F

    2014-03-01

    Blur detection is affected by retinal eccentricity, but is it also affected by attentional resources? Research showing effects of selective attention on acuity and contrast sensitivity suggests that allocating attention should increase blur detection. However, research showing that blur affects selection of saccade targets suggests that blur detection may be pre-attentive. To investigate this question, we carried out experiments in which viewers detected blur in real-world scenes under varying levels of cognitive load manipulated by the N-back task. We used adaptive threshold estimation to measure blur detection thresholds at 0°, 3°, 6°, and 9° eccentricity. Participants carried out blur detection as a single task, a single task with to-be-ignored letters, or an N-back task with four levels of cognitive load (0, 1, 2, or 3-back). In Experiment 1, blur was presented gaze-contingently for occasional single eye fixations while participants viewed scenes in preparation for an easy picture recognition memory task, and the N-back stimuli were presented auditorily. The results for three participants showed a large effect of retinal eccentricity on blur thresholds, significant effects of N-back level on N-back performance, scene recognition memory, and gaze dispersion, but no effect of N-back level on blur thresholds. In Experiment 2, we replicated Experiment 1 but presented the images tachistoscopically for 200 ms (half with, half without blur), to determine whether gaze-contingent blur presentation in Experiment 1 had produced attentional capture by blur onset during a fixation, thus eliminating any effect of cognitive load on blur detection. The results with three new participants replicated those of Experiment 1, indicating that the use of gaze-contingent blur presentation could not explain the lack of effect of cognitive load on blur detection. Thus, apparently blur detection in real-world scene images is unaffected by attentional resources, as manipulated by the cognitive load produced by the N-back task.
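
    The adaptive threshold estimation mentioned in this record can be illustrated with a generic 2-down/1-up staircase; the sketch below is not the specific estimator the authors used, and the simulated observer is an assumption for demonstration.

    ```python
    # Generic 2-down/1-up staircase sketch of adaptive threshold estimation (not the
    # study's estimator): the stimulus level decreases after two consecutive correct
    # responses and increases after each miss, converging near the ~71%-correct point.
    import numpy as np

    def staircase(simulated_threshold, start=2.0, step=0.2, n_trials=80, seed=0):
        rng = np.random.default_rng(seed)
        level, correct_streak, reversals, last_direction = start, 0, [], None
        for _ in range(n_trials):
            # Simulated observer: more blur than threshold -> more likely detected
            p_correct = 1 / (1 + np.exp(-(level - simulated_threshold) / 0.2))
            if rng.random() < p_correct:
                correct_streak += 1
                if correct_streak == 2:                  # 2-down: make the task harder
                    direction, correct_streak = -1, 0
                else:
                    continue                             # no level change yet
            else:
                direction, correct_streak = +1, 0        # 1-up: make the task easier
            if last_direction is not None and direction != last_direction:
                reversals.append(level)                  # record reversal points
            last_direction = direction
            level = max(0.0, level + direction * step)
        # Threshold estimate: mean of the last few reversal levels
        return np.mean(reversals[-6:]) if reversals else level

    print(staircase(simulated_threshold=1.0))
    ```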

  18. The relationships between trait anxiety, place recognition memory, and learning strategy.

    PubMed

    Hawley, Wayne R; Grissom, Elin M; Dohanich, Gary P

    2011-01-20

    Rodents learn to navigate mazes using various strategies that are governed by specific regions of the brain. The type of strategy used when learning to navigate a spatial environment is moderated by a number of factors including emotional states. Heightened anxiety states, induced by exposure to stressors or administration of anxiogenic agents, have been found to bias male rats toward the use of a striatum-based stimulus-response strategy rather than a hippocampus-based place strategy. However, no study has yet examined the relationship between natural anxiety levels, or trait anxiety, and the type of learning strategy used by rats on a dual-solution task. In the current experiment, levels of inherent anxiety were measured in an open field and compared to performance on two separate cognitive tasks, a Y-maze task that assessed place recognition memory, and a visible platform water maze task that assessed learning strategy. Results indicated that place recognition memory on the Y-maze correlated with the use of place learning strategy on the water maze. Furthermore, lower levels of trait anxiety correlated positively with better place recognition memory and with the preferred use of place learning strategy. Therefore, competency in place memory and bias in place strategy are linked to the levels of inherent anxiety in male rats. Copyright © 2010 Elsevier B.V. All rights reserved.

  19. A benefit of context reinstatement to recognition memory in aging: the role of familiarity processes.

    PubMed

    Ward, Emma V; Maylor, Elizabeth A; Poirier, Marie; Korko, Malgorzata; Ruud, Jens C M

    2017-11-01

    Reinstatement of encoding context facilitates memory for targets in young and older individuals (e.g., a word studied on a particular background scene is more likely to be remembered later if it is presented on the same rather than a different scene or no scene), yet older adults are typically inferior at recalling and recognizing target-context pairings. This study examined the mechanisms of the context effect in normal aging. Age differences in word recognition by context condition (original, switched, none, new), and the ability to explicitly remember target-context pairings were investigated using word-scene pairs (Experiment 1) and word-word pairs (Experiment 2). Both age groups benefited from context reinstatement in item recognition, although older adults were significantly worse than young adults at identifying original pairings and at discriminating between original and switched pairings. In Experiment 3, participants were given a three-alternative forced-choice recognition task that allowed older individuals to draw upon intact familiarity processes in selecting original pairings. Performance was age equivalent. Findings suggest that heightened familiarity associated with context reinstatement is useful for boosting recognition memory in aging.

  20. Relaxing decision criteria does not improve recognition memory in amnesic patients.

    PubMed

    Reber, P J; Squire, L R

    1999-05-01

    An important question about the organization of memory is whether information available in non-declarative memory can contribute to performance on tasks of declarative memory. Dorfman, Kihlstrom, Cork, and Misiaszek (1995) described a circumstance in which the phenomenon of priming might benefit recognition memory performance. They reported that patients receiving electroconvulsive therapy improved their recognition performance when they were encouraged to relax their criteria for endorsing test items as familiar. It was suggested that priming improved recognition by making information available about the familiarity of test items. In three experiments, we sought unsuccessfully to reproduce this phenomenon in amnesic patients. In Experiment 3, we reproduced the methods and procedure used by Dorfman et al. but still found no evidence for improved recognition memory following the manipulation of decision criteria. Although negative findings have their own limitations, our findings suggest that the phenomenon reported by Dorfman et al. does not generalize well. Our results agree with several recent findings that suggest that priming is independent of recognition memory and does not contribute to recognition memory scores.

  1. Short-term memory in autism spectrum disorder.

    PubMed

    Poirier, Marie; Martin, Jonathan S; Gaigg, Sebastian B; Bowler, Dermot M

    2011-02-01

    Three experiments examined verbal short-term memory in comparison participants and participants with autism spectrum disorder (ASD). Experiment 1 involved forward and backward digit recall. Experiment 2 used a standard immediate serial recall task in which, unlike the digit-span task, items (words) were not repeated from list to list; hence, this task called more heavily on item memory. Experiment 3 tested short-term order memory with an order recognition test: each word list was repeated with or without the positions of 2 adjacent items swapped. The ASD group showed poorer performance in all 3 experiments. Experiments 1 and 2 showed that group differences were due to memory for the order of the items, not to memory for the items themselves. Confirming these findings, the results of Experiment 3 showed that the ASD group had more difficulty detecting a change in the temporal sequence of the items. (c) 2010 APA, all rights reserved.

  2. Banknote recognition: investigating processing and cognition framework using competitive neural network.

    PubMed

    Oyedotun, Oyebade K; Khashman, Adnan

    2017-02-01

    Humans are apt at recognizing patterns and discovering even abstract features which are sometimes embedded therein. Our ability to use the banknotes in circulation for business transactions lies in the effortlessness with which we can recognize the different banknote denominations after seeing them over a period of time. More significant is that we can usually recognize these banknote denominations irrespective of what parts of the banknotes are exposed to us visually. Furthermore, our recognition ability is largely unaffected even when these banknotes are partially occluded. By analogy, the robustness of intelligent systems performing the task of banknote recognition should not collapse under some minimum level of partial occlusion. Artificial neural networks are intelligent systems which from inception have taken many important cues related to structure and learning rules from the human nervous/cognition processing system. Likewise, it has been shown that advances in artificial neural network simulations can help us understand the human nervous/cognition system even further. In this paper, we investigate three cognition hypothetical frameworks for vision-based recognition of banknote denominations using competitive neural networks. In order to make the task more challenging and stress-test the investigated hypotheses, we also consider the recognition of occluded banknotes. The implemented hypothetical systems are tasked to perform fast recognition of banknotes with up to 75% occlusion. The investigated hypothetical systems are trained on Nigeria's Naira banknotes, and several experiments are performed to demonstrate the findings presented within this work.
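
    The record does not spell out the network structure, so the following is only a rough sketch, under assumed parameters, of the plain winner-take-all competitive learning that "competitive neural network" usually denotes; the prototype count, learning rate, and function names are illustrative and are not taken from the paper.

      # Minimal winner-take-all competitive layer (illustrative sketch only,
      # not the authors' implementation). Prototypes compete for every input
      # vector; the winner is nudged toward it, so recurring banknote
      # patterns end up represented by nearby prototypes.
      import numpy as np

      def train_competitive(X, n_units=10, lr=0.05, epochs=20, seed=0):
          rng = np.random.default_rng(seed)
          W = X[rng.choice(len(X), n_units, replace=False)].astype(float)  # init from data
          for _ in range(epochs):
              for x in X:
                  winner = np.argmin(np.linalg.norm(W - x, axis=1))  # closest prototype
                  W[winner] += lr * (x - W[winner])                  # move it toward x
          return W

      def classify(W, prototype_labels, x):
          # prototype_labels: class assigned to each prototype after training
          return prototype_labels[np.argmin(np.linalg.norm(W - x, axis=1))]

    Partial occlusion of the kind described above could then be simulated by zeroing a fraction of the pixels in x before calling classify, which is one simple way to stress-test such a layer.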

  3. A Multidimensional Approach to the Study of Emotion Recognition in Autism Spectrum Disorders

    PubMed Central

    Xavier, Jean; Vignaud, Violaine; Ruggiero, Rosa; Bodeau, Nicolas; Cohen, David; Chaby, Laurence

    2015-01-01

    Although deficits in emotion recognition have been widely reported in autism spectrum disorder (ASD), experiments have been restricted to either facial or vocal expressions. Here, we explored multimodal emotion processing in children with ASD (N = 19) and with typical development (TD, N = 19), considering unimodal (faces and voices) and multimodal (faces/voices simultaneously) stimuli and developmental comorbidities (neuro-visual, language and motor impairments). Compared to TD controls, children with ASD had rather high and heterogeneous emotion recognition scores but also showed several significant differences: lower emotion recognition scores for visual stimuli, for neutral emotion, and a greater number of saccades during the visual task. Multivariate analyses showed that: (1) the difficulties they experienced with visual stimuli were partially alleviated with multimodal stimuli; (2) developmental age was significantly associated with emotion recognition in TD children, whereas it was the case only for the multimodal task in children with ASD; (3) language impairments tended to be associated with emotion recognition scores of ASD children in the auditory modality. Conversely, in the visual or bimodal (visuo-auditory) tasks, the impact of developmental coordination disorder or neuro-visual impairments was not found. We conclude that impaired emotion processing constitutes a dimension to explore in the field of ASD, as research has the potential to define more homogeneous subgroups and tailored interventions. However, it is clear that developmental age, the nature of the stimuli, and other developmental comorbidities must also be taken into account when studying this dimension. PMID:26733928

  4. Components of executive functioning in metamemory.

    PubMed

    Mäntylä, Timo; Rönnlund, Michael; Kliegel, Matthias

    2010-10-01

    This study examined metamemory in relation to three basic executive functions (set shifting, working memory updating, and response inhibition) measured as latent variables. Young adults (Experiment 1) and middle-aged adults (Experiment 2) completed a set of executive functioning tasks and the Prospective and Retrospective Memory Questionnaire (PRMQ). In Experiment 1, source recall and face recognition tasks were included as indicators of objective memory performance. In both experiments, analyses of the executive functioning data yielded a two-factor solution, with the updating and inhibition tasks constituting a common factor and the shifting tasks a separate factor. Self-reported memory problems showed low predictive validity, but subjective and objective memory performance were related to different components of executive functioning. In both experiments, set shifting, but not updating and inhibition, was related to PRMQ, whereas source recall showed the opposite pattern of correlations in Experiment 1. These findings suggest that metamemorial judgments reflect selective effects of executive functioning and that individual differences in mental flexibility contribute to self-beliefs of efficacy.

  5. Social Experience Does Not Abolish Cultural Diversity in Eye Movements

    PubMed Central

    Kelly, David J.; Jack, Rachael E.; Miellet, Sébastien; De Luca, Emanuele; Foreman, Kay; Caldara, Roberto

    2011-01-01

    Adults from Eastern (e.g., China) and Western (e.g., USA) cultural groups display pronounced differences in a range of visual processing tasks. For example, the eye movement strategies used for information extraction during a variety of face processing tasks (e.g., identification and categorization of facial expressions of emotion) differ across cultural groups. Many of the differences reported in previous studies have been attributed to culture itself shaping the way we process visual information, yet this has never been directly investigated. In the current study, we assessed the relative contribution of genetic and cultural factors by testing face processing in a population of British Born Chinese adults using face recognition and expression classification tasks. Contrary to predictions made by the cultural differences framework, the majority of British Born Chinese adults deployed “Eastern” eye movement strategies, while approximately 25% of participants displayed “Western” strategies. Furthermore, the cultural eye movement strategies used by individuals were consistent across recognition and expression tasks. These findings suggest that “culture” alone cannot straightforwardly account for diversity in eye movement patterns. Instead, a more complex understanding of how the environment and individual experiences can influence the mechanisms that govern visual processing is required. PMID:21886626

  6. Assessment of Self-Recognition in Young Children with Handicaps.

    ERIC Educational Resources Information Center

    Kelley, Michael F.; And Others

    1988-01-01

    Thirty young children with handicaps were assessed on five self-recognition mirror tasks. The set of tasks formed a reproducible scale, indicating that these tasks are an appropriate measure of self-recognition in this population. Data analysis suggested that stage of self-recognition is positively and significantly related to cognitive…

  7. Evidence for the Activation of Sensorimotor Information during Visual Word Recognition: The Body-Object Interaction Effect

    ERIC Educational Resources Information Center

    Siakaluk, Paul D.; Pexman, Penny M.; Aguilera, Laura; Owen, William J.; Sears, Christopher R.

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., "mask") and a set of low BOI…

  8. Differential amygdala response during facial recognition in patients with schizophrenia: an fMRI study.

    PubMed

    Kosaka, H; Omori, M; Murata, T; Iidaka, T; Yamada, H; Okada, T; Takahashi, T; Sadato, N; Itoh, H; Yonekura, Y; Wada, Y

    2002-09-01

    Human lesion and neuroimaging studies suggest that the amygdala is involved in facial emotion recognition. Although impairments in recognition of facial and/or emotional expression have been reported in schizophrenia, there are few neuroimaging studies that have examined differential brain activation during facial recognition between patients with schizophrenia and normal controls. To investigate amygdala responses during facial recognition in schizophrenia, we conducted a functional magnetic resonance imaging (fMRI) study with 12 right-handed medicated patients with schizophrenia and 12 age- and sex-matched healthy controls. The experimental task was an emotional intensity judgment task. During the task period, subjects were asked to view happy (or angry/disgusting/sad) and neutral faces presented simultaneously every 3 s and to judge which face was more emotional (positive or negative face discrimination). Imaging data were investigated on a voxel-by-voxel basis for single-group analysis and for between-group analysis according to the random effect model using Statistical Parametric Mapping (SPM). No significant difference in task accuracy was found between the schizophrenic and control groups. Positive face discrimination activated the bilateral amygdalae of both controls and schizophrenics, with more prominent activation of the right amygdala in the schizophrenic group. Negative face discrimination activated the bilateral amygdalae in the schizophrenic group but only the right amygdala in the control group, although no significant group difference was found. The exaggerated amygdala activation during emotional intensity judgment found in the schizophrenic patients may reflect impaired gating of sensory input containing emotion. Copyright 2002 Elsevier Science B.V.

  9. Improved memory for error feedback.

    PubMed

    Van der Borght, Liesbet; Schouppe, Nathalie; Notebaert, Wim

    2016-11-01

    Surprising feedback in a general knowledge test leads to an improvement in memory for both the surface features and the content of the feedback (Psychon Bull Rev 16:88-92, 2009). Based on the idea that in cognitive tasks, error is surprising (the orienting account, Cognition 111:275-279, 2009), we tested whether error feedback would be better remembered than correct feedback. Colored words were presented as feedback signals in a flanker task, where the color indicated the accuracy. Subsequently, these words were again presented during a recognition task (Experiment 1) or a lexical decision task (Experiments 2 and 3). In all experiments, memory was improved for words seen as error feedback. These results are compared to the attentional boost effect (J Exp Psychol Learn Mem Cogn 39:1223-12231, 2013) and related to the orienting account for post-error slowing (Cognition 111:275-279, 2009).

  10. Discriminability effect on Garner interference: evidence from recognition of facial identity and expression

    PubMed Central

    Wang, Yamin; Fu, Xiaolan; Johnston, Robert A.; Yan, Zheng

    2013-01-01

    Using Garner’s speeded classification task, existing studies have demonstrated an asymmetric interference in the recognition of facial identity and facial expression: expression appears hardly to interfere with identity recognition. However, discriminability of identity and expression, a potential confounding variable, had not been carefully examined in those studies. In the current work, we manipulated the discriminability of identity and expression by matching facial shape (long or round) in identity and matching mouth (opened or closed) in facial expression. Garner interference was found either from identity to expression (Experiment 1) or from expression to identity (Experiment 2). Interference was also found in both directions (Experiment 3) or in neither direction (Experiment 4). The results support the view that Garner interference tends to occur under conditions of low discriminability of the relevant dimension, regardless of facial property. Our findings indicate that Garner interference is not necessarily related to interdependent processing in the recognition of facial identity and expression. They also suggest that discriminability, as a mediating factor, should be carefully controlled in future research. PMID:24391609

  11. Multisensory emotion perception in congenitally, early, and late deaf CI users

    PubMed Central

    Nava, Elena; Villwock, Agnes K.; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences. PMID:29023525

  12. Multisensory emotion perception in congenitally, early, and late deaf CI users.

    PubMed

    Fengler, Ineke; Nava, Elena; Villwock, Agnes K; Büchner, Andreas; Lenarz, Thomas; Röder, Brigitte

    2017-01-01

    Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. CI users had a different deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed overall less efficiently than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.

  13. Cingulo-opercular activity affects incidental memory encoding for speech in noise.

    PubMed

    Vaden, Kenneth I; Teubner-Rhodes, Susan; Ahlstrom, Jayne B; Dubno, Judy R; Eckert, Mark A

    2017-08-15

    Correctly understood speech in difficult listening conditions is often difficult to remember. A long-standing hypothesis for this observation is that the engagement of cognitive resources to aid speech understanding can limit resources available for memory encoding. This hypothesis is consistent with evidence that speech presented in difficult conditions typically elicits greater activity throughout cingulo-opercular regions of frontal cortex that are proposed to optimize task performance through adaptive control of behavior and tonic attention. However, successful memory encoding of items for delayed recognition memory tasks is consistently associated with increased cingulo-opercular activity when perceptual difficulty is minimized. The current study used a delayed recognition memory task to test competing predictions that memory encoding for words is enhanced or limited by the engagement of cingulo-opercular activity during challenging listening conditions. An fMRI experiment was conducted with twenty healthy adult participants who performed a word identification in noise task that was immediately followed by a delayed recognition memory task. Consistent with previous findings, word identification trials in the poorer signal-to-noise ratio condition were associated with increased cingulo-opercular activity and poorer recognition memory scores on average. However, cingulo-opercular activity decreased for correctly identified words in noise that were not recognized in the delayed memory test. These results suggest that memory encoding in difficult listening conditions is poorer when elevated cingulo-opercular activity is not sustained. Although increased attention to speech when presented in difficult conditions may detract from more active forms of memory maintenance (e.g., sub-vocal rehearsal), we conclude that task performance monitoring and/or elevated tonic attention supports incidental memory encoding in challenging listening conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. 76 FR 78969 - National Technical Assistance Center for Senior Transportation: Solicitation for Proposals

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-12-20

    ..., authorized the National Senior Center under 49 U.S.C. 5314(c). In recognition of the fundamental importance..., Capacity and experience for conducting face-to-face and Web-based training. IV. Proposal Submission... tasks, including capacity and experience for conducting face-to-face and Web-based [[Page 78973...

  15. Fast Morphological Effects in First and Second Language Word Recognition

    ERIC Educational Resources Information Center

    Diependaele, Kevin; Dunabeitia, Jon Andoni; Morris, Joanna; Keuleers, Emmanuel

    2011-01-01

    In three experiments we compared the performance of native English speakers to that of Spanish-English and Dutch-English bilinguals on a masked morphological priming lexical decision task. The results do not show significant differences across the three experiments. In line with recent meta-analyses, we observed a graded pattern of facilitation…

  16. Influence of Stimulus Symmetry and Complexity upon Haptic Scanning Strategies During Detection, Learning and Recognition Tasks.

    ERIC Educational Resources Information Center

    Locher, Paul J.; Simmons, Roger W.

    Two experiments were conducted to investigate the perceptual processes involved in haptic exploration of randomly generated shapes. Experiment one required subjects to detect symmetrical or asymmetrical characteristics of individually presented plastic shapes, also varying in complexity. Scanning time for both symmetrical and asymmetrical shapes…

  17. Continuous recognition of spatial and nonspatial stimuli in hippocampal-lesioned rats.

    PubMed

    Jackson-Smith, P; Kesner, R P; Chiba, A A

    1993-03-01

    The present experiments compared the performance of hippocampal-lesioned rats to control rats on a spatial continuous recognition task and an analogous nonspatial task with similar processing demands. Daily sessions for Experiment 1 involved sequential presentation of individual arms on a 12-arm radial maze. Each arm contained a Froot Loop reinforcement the first time it was presented, and latency to traverse the arm was measured. A subset of the arms were repeated, but did not contain reinforcement. Repeated arms were presented with lags ranging from 0 to 6 (0 to 6 different arm presentations occurred between the first and the repeated presentation). Difference scores were computed by subtracting the latency on first presentations from the latency on repeated presentations, and these scores were high in all rats prior to surgery, with a decreasing function across lag. There were no differences in performance following cortical control or sham surgery. However, there was a total deficit in performance following large electrolytic lesions of the hippocampus. The second experiment employed the same continuous recognition memory procedure, but used three-dimensional visual objects (toys, junk items, etc., in various shapes, sizes, and textures) as stimuli on a flat runway. As in Experiment 1, the stimuli were presented successively and latency to run to and move the object was measured. Objects were repeated with lags ranging from 0 to 4. Performance on this task following surgery did not differ from performance prior to surgery for either the control group or the hippocampal lesion group. These results provide support for Kesner's attribute model of hippocampal function in that the hippocampus is assumed to mediate data-based memory for spatial locations, but not three-dimensional visual objects.

  18. Development of Collaborative Research Initiatives to Advance the Aerospace Sciences-via the Communications, Electronics, Information Systems Focus Group

    NASA Technical Reports Server (NTRS)

    Knasel, T. Michael

    1996-01-01

    The primary goal of the Adaptive Vision Laboratory Research project was to develop advanced computer vision systems for automatic target recognition. The approach used in this effort combined several machine learning paradigms including evolutionary learning algorithms, neural networks, and adaptive clustering techniques to develop the E-MORPH system. This system is capable of generating pattern recognition systems to solve a wide variety of complex recognition tasks. A series of simulation experiments were conducted using E-MORPH to solve problems in OCR, military target recognition, industrial inspection, and medical image analysis. The bulk of the funds provided through this grant were used to purchase computer hardware and software to support these computationally intensive simulations. The payoff from this effort is the reduced need for human involvement in the design and implementation of recognition systems. We have shown that the techniques used in E-MORPH are generic and readily transition to other problem domains. Specifically, E-MORPH is a multi-phase evolutionary learning system that evolves cooperative sets of feature detectors and combines their responses using an adaptive classifier to form a complete pattern recognition system. The system can operate on binary or grayscale images. In our most recent experiments, we used multi-resolution images that are formed by applying a Gabor wavelet transform to a set of grayscale input images. To begin the learning process, candidate chips are extracted from the multi-resolution images to form a training set and a test set. A population of detector sets is randomly initialized to start the evolutionary process. Using a combination of evolutionary programming and genetic algorithms, the feature detectors are enhanced to solve a recognition problem. The design of E-MORPH and recognition results for a complex problem in medical image analysis are described at the end of this report. The specific task involves the identification of vertebrae in x-ray images of human spinal columns. This problem is extremely challenging because the individual vertebrae exhibit variation in shape, scale, orientation, and contrast. E-MORPH generated several accurate recognition systems to solve this task. This dual use of ATR technology clearly demonstrates the flexibility and power of our approach.
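
    The report describes E-MORPH only at a high level; as a hedged illustration of one ingredient it names, the Gabor wavelet transform applied to grayscale input images, the sketch below builds a single Gabor kernel and summarizes filter responses into a small feature vector that a downstream classifier (evolved or otherwise) could consume. Kernel sizes, parameter values, and function names are assumptions for illustration, not the E-MORPH system itself.

      # Illustrative Gabor-response features (one ingredient mentioned above;
      # this is not the E-MORPH system).
      import numpy as np
      from scipy.signal import convolve2d

      def gabor_kernel(size=21, wavelength=8.0, theta=0.0, sigma=4.0):
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1]
          xr = x * np.cos(theta) + y * np.sin(theta)        # coordinate along the wave
          envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))  # isotropic Gaussian window
          return envelope * np.cos(2 * np.pi * xr / wavelength)

      def gabor_features(image, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
          # Mean absolute response per orientation gives a tiny feature vector.
          return np.array([np.abs(convolve2d(image, gabor_kernel(theta=t), mode="same")).mean()
                           for t in thetas])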

  19. Judgments of Learning are Influenced by Multiple Cues In Addition to Memory for Past Test Accuracy.

    PubMed

    Hertzog, Christopher; Hines, Jarrod C; Touron, Dayna R

    When people try to learn new information (e.g., in a school setting), they often have multiple opportunities to study the material. One of the most important things to know is whether people adjust their study behavior on the basis of past success so as to increase their overall level of learning (for example, by emphasizing information they have not yet learned). Monitoring their learning is a key part of being able to make those kinds of adjustments. We used a recognition memory task to replicate prior research showing that memory for past test outcomes influences later monitoring, as measured by judgments of learning (JOLs; confidence that the material has been learned), but also to show that subjective confidence in the test answer and the amount of time taken to restudy the items also have independent effects on JOLs. We also show that there are individual differences in the effects of test accuracy and test confidence on JOLs, showing that some but not all people use past test experiences to guide monitoring of their new learning. Monitoring learning is therefore a complex process of considering multiple cues, and some people attend to those cues more effectively than others. Improving the quality of monitoring performance and learning could lead to better study behaviors and better learning. An individual's memory of past test performance (MPT) is often cited as the primary cue for judgments of learning (JOLs) following test experience during multi-trial learning tasks (Finn & Metcalfe, 2007; 2008). We used an associative recognition task to evaluate MPT-related phenomena, because performance monitoring, as measured by recognition test confidence judgments (CJs), is fallible and varies in accuracy across persons. The current study used multilevel regression models to show the simultaneous and independent influences of multiple cues on Trial 2 JOLs, in addition to performance accuracy (the typical measure of MPT in cued-recall experiments). These cues include recognition CJs, perceived recognition fluency, and Trial 2 study time allocation (an index of reprocessing fluency). Our results expand the scope of MPT-related phenomena in recognition memory testing to show independent effects of recognition test accuracy and CJs on second-trial JOLs, while also demonstrating individual differences in the effects of these cues on JOLs (as manifested in significant random effects for those regression effects in the model). The effect of study time on second-trial JOLs, controlling for other variables including Trial 1 recognition memory accuracy, also demonstrates that second-trial encoding behavior influences JOLs in addition to MPT.

  20. ROBOSIGHT: Robotic Vision System For Inspection And Manipulation

    NASA Astrophysics Data System (ADS)

    Trivedi, Mohan M.; Chen, ChuXin; Marapane, Suresh

    1989-02-01

    Vision is an important sensory modality that can be used for deriving information critical to the proper, efficient, flexible, and safe operation of an intelligent robot. Vision systems are utilized for developing higher level interpretation of the nature of a robotic workspace using images acquired by cameras mounted on a robot. Such information can be useful for tasks such as object recognition, object location, object inspection, obstacle avoidance and navigation. In this paper we describe efforts directed towards developing a vision system useful for performing various robotic inspection and manipulation tasks. The system utilizes gray scale images and can be viewed as a model-based system. It includes general purpose image analysis modules as well as special purpose, task dependent object status recognition modules. Experiments are described to verify the robust performance of the integrated system using a robotic testbed.

  1. Urdu Nasta'liq text recognition using implicit segmentation based on multi-dimensional long short term memory neural networks.

    PubMed

    Naz, Saeeda; Umar, Arif Iqbal; Ahmed, Riaz; Razzak, Muhammad Imran; Rashid, Sheikh Faisal; Shafait, Faisal

    2016-01-01

    The recognition of Arabic script and its derivatives such as Urdu, Persian, and Pashto is a difficult task due to the complexity of this script. Urdu text recognition is particularly difficult because of its Nasta'liq writing style. The Nasta'liq writing style has a complex calligraphic nature, which poses major challenges for the recognition of Urdu text owing to its diagonal writing, high cursiveness, context sensitivity, and overlapping characters. Therefore, work done on recognition of Arabic script cannot be directly applied to Urdu recognition. We present Multi-dimensional Long Short Term Memory (MDLSTM) Recurrent Neural Networks with an output layer designed for sequence labeling for recognition of printed Urdu text-lines written in the Nasta'liq writing style. Experiments show that MDLSTM attained a recognition accuracy of 98% for unconstrained Urdu Nasta'liq printed text, which significantly outperforms state-of-the-art techniques.
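
    Common deep learning toolkits do not ship a multi-dimensional LSTM, so the sketch below substitutes an ordinary bidirectional 1D LSTM over column slices of a text-line image with a CTC output layer; it only illustrates the implicit-segmentation, sequence-labelling idea in a much simplified form and is not the paper's MDLSTM architecture. The image height, class count, and all names here are assumptions.

      # Simplified sequence-labelling recognizer: a bidirectional LSTM over
      # image columns trained with CTC, standing in for the MDLSTM described
      # above (illustration only).
      import torch
      import torch.nn as nn

      class LineRecognizer(nn.Module):
          def __init__(self, img_height=48, hidden=128, n_classes=100):  # class 0 = CTC blank
              super().__init__()
              self.lstm = nn.LSTM(img_height, hidden, bidirectional=True, batch_first=True)
              self.fc = nn.Linear(2 * hidden, n_classes)

          def forward(self, x):                      # x: (batch, width, height)
              out, _ = self.lstm(x)                  # columns act as time steps
              return self.fc(out).log_softmax(-1)    # (batch, width, n_classes)

      model = LineRecognizer()
      lines = torch.rand(2, 200, 48)                 # two dummy text-line images
      log_probs = model(lines).permute(1, 0, 2)      # CTC expects (time, batch, classes)
      targets = torch.randint(1, 100, (2, 30))       # dummy character label sequences
      loss = nn.CTCLoss(blank=0)(log_probs, targets,
                                 torch.full((2,), 200, dtype=torch.long),
                                 torch.full((2,), 30, dtype=torch.long))

    Replacing the recurrent layer with a true two-dimensional recurrence (scanning the image along both axes) is what distinguishes the MDLSTM reported above from this simplified stand-in.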

  2. Repetition reveals ups and downs of hippocampal, thalamic, and neocortical engagement during mnemonic decisions

    PubMed Central

    Reagh, Zachariah M.; Murray, Elizabeth A.; Yassa, Michael A.

    2017-01-01

    The extent to which current information is consistent with past experiences and our capacity to recognize or discriminate accordingly are key factors in flexible memory-guided behavior. Despite a wealth of evidence linking hippocampal and neocortical computations to these phenomena, many important factors remain poorly understood. One such factor is repeated encoding of learned information. In this experiment, participants completed a task in which study stimuli were incidentally encoded either once or three separate times during high-resolution fMRI scanning. We asked how repetition influenced recognition and discrimination memory judgments, and how this affected engagement of hippocampal and neocortical regions. Repetition revealed shifts in engagement in an anterior (ventral) CA1-thalamic-medial prefrontal network related to true and false recognition. Conversely, repetition revealed shifts in a posterior (dorsal) dentate/CA3-parahippocampal-retrosplenial network related to accurate discrimination. These differences in engagement were accompanied by task-related correlations in respective anterior and posterior networks. In particular, the anterior thalamic region observed during recognition judgments is functionally and anatomically consistent with nucleus reuniens in humans, and was found to mediate correlations between the anterior CA1 and medial prefrontal cortex. These findings offer new insights into how repeated experience affects memory and its neural substrates in hippocampal-neocortical networks. PMID:27859884

  3. Investigating the Role of Assessment Method on Reports of Déjà Vu and Tip-of-the-Tongue States during Standard Recognition Tests

    PubMed Central

    Jersakova, Radka; Moulin, Chris J. A.

    2016-01-01

    Déjà vu and tip-of-the-tongue (TOT) are retrieval-related subjective experiences whose study relies on participant self-report. In four experiments (ns = 224, 273, 123 and 154), we explored the effect of questioning method on reported occurrence of déjà vu and TOT in experimental settings. All participants carried out a continuous recognition task, which was not expected to induce déjà vu or TOT, but were asked about their experiences of these subjective states. When presented with contemporary definitions, between 32% and 58% of participants nonetheless reported experiencing déjà vu or TOT. Changing the definition of déjà vu or asking participants to bring to mind a real-life instance of déjà vu or TOT before completing the recognition task had no impact on reporting rates. However, there was an indication that changing the method of requesting subjective reports impacted reporting of both experiences. More specifically, moving from the commonly used retrospective questioning (e.g. “Have you experienced déjà vu?”) to free report instructions (e.g. “Indicate whenever you experience déjà vu.”) reduced the total number of reported déjà vu and TOT occurrences. We suggest that research on subjective experiences should move toward free report assessments. Such a shift would potentially reduce the presence of false alarms in experimental work, thereby reducing the overestimation of subjective experiences prevalent in this area of research. PMID:27100292

  4. Investigating the Role of Assessment Method on Reports of Déjà Vu and Tip-of-the-Tongue States during Standard Recognition Tests.

    PubMed

    Jersakova, Radka; Moulin, Chris J A; O'Connor, Akira R

    2016-01-01

    Déjà vu and tip-of-the-tongue (TOT) are retrieval-related subjective experiences whose study relies on participant self-report. In four experiments (ns = 224, 273, 123 and 154), we explored the effect of questioning method on reported occurrence of déjà vu and TOT in experimental settings. All participants carried out a continuous recognition task, which was not expected to induce déjà vu or TOT, but were asked about their experiences of these subjective states. When presented with contemporary definitions, between 32% and 58% of participants nonetheless reported experiencing déjà vu or TOT. Changing the definition of déjà vu or asking participants to bring to mind a real-life instance of déjà vu or TOT before completing the recognition task had no impact on reporting rates. However, there was an indication that changing the method of requesting subjective reports impacted reporting of both experiences. More specifically, moving from the commonly used retrospective questioning (e.g. "Have you experienced déjà vu?") to free report instructions (e.g. "Indicate whenever you experience déjà vu.") reduced the total number of reported déjà vu and TOT occurrences. We suggest that research on subjective experiences should move toward free report assessments. Such a shift would potentially reduce the presence of false alarms in experimental work, thereby reducing the overestimation of subjective experiences prevalent in this area of research.

  5. Characteristics of speaking style and implications for speech recognition.

    PubMed

    Shinozaki, Takahiro; Ostendorf, Mari; Atlas, Les

    2009-09-01

    Differences in speaking style are associated with more or less spectral variability, as well as different modulation characteristics. The greater variation in some styles (e.g., spontaneous speech and infant-directed speech) poses challenges for recognition but possibly also opportunities for learning more robust models, as evidenced by prior work and motivated by child language acquisition studies. In order to investigate this possibility, this work proposes a new method for characterizing speaking style (the modulation spectrum), examines spontaneous, read, adult-directed, and infant-directed styles in this space, and conducts pilot experiments in style detection and sampling for improved speech recognizer training. Speaking style classification is improved by using the modulation spectrum in combination with standard pitch and energy variation. Speech recognition experiments on a small vocabulary conversational speech recognition task show that sampling methods for training with a small amount of data benefit from the new features.
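
    As a rough, assumed illustration of the feature the abstract names, the snippet below computes a broadband modulation spectrum as the low-frequency spectrum of the amplitude envelope; a fuller treatment would compute this per mel or critical-band channel, and the cutoff value and function names here are not taken from the paper.

      # Rough modulation-spectrum feature: the spectrum of the amplitude
      # envelope, which captures how slowly or quickly speech energy fluctuates.
      import numpy as np
      from scipy.signal import hilbert

      def modulation_spectrum(signal, sample_rate, max_mod_hz=32.0):
          envelope = np.abs(hilbert(signal))        # amplitude envelope of the waveform
          envelope -= envelope.mean()               # drop the DC component
          spectrum = np.abs(np.fft.rfft(envelope))
          freqs = np.fft.rfftfreq(len(envelope), 1.0 / sample_rate)
          keep = freqs <= max_mod_hz                # speech modulations sit at low rates
          return freqs[keep], spectrum[keep]

    Summaries of this spectrum (e.g., its energy in a few modulation-frequency bands) could then be combined with pitch and energy variation as classifier inputs, along the lines described above.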

  6. Effects of perceptual similarity but not semantic association on false recognition in aging

    PubMed Central

    Gill, Emma

    2017-01-01

    This study investigated semantic and perceptual influences on false recognition in older and young adults in a variant on the Deese-Roediger-McDermott paradigm. In two experiments, participants encoded intermixed sets of semantically associated words, and sets of unrelated words. Each set was presented in a shared distinctive font. Older adults were no more likely to falsely recognize semantically associated lure words compared to unrelated lures also presented in studied fonts. However, they showed an increase in false recognition of lures which were related to studied items only by a shared font. This increased false recognition was associated with recollective experience. The data show that older adults do not always rely more on prior knowledge in episodic memory tasks. They converge with other findings suggesting that older adults may also be more prone to perceptually-driven errors. PMID:29302398

  7. Learning Weight Uncertainty with Stochastic Gradient MCMC for Shape Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Chunyuan; Stevens, Andrew J.; Chen, Changyou

    2016-08-10

    Learning the representation of shape cues in 2D & 3D objects for recognition is a fundamental task in computer vision. Deep neural networks (DNNs) have shown promising performance on this task. Due to the large variability of shapes, accurate recognition relies on good estimates of model uncertainty, ignored in traditional training of DNNs, typically learned via stochastic optimization. This paper leverages recent advances in stochastic gradient Markov Chain Monte Carlo (SG-MCMC) to learn weight uncertainty in DNNs. It yields principled Bayesian interpretations for the commonly used Dropout/DropConnect techniques and incorporates them into the SG-MCMC framework. Extensive experiments on 2D & 3D shape datasets and various DNN models demonstrate the superiority of the proposed approach over stochastic optimization. Our approach yields higher recognition accuracy when used in conjunction with Dropout and Batch-Normalization.
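
    The record names SG-MCMC only generically; stochastic gradient Langevin dynamics (SGLD) is the simplest member of that family, and the textbook update below shows the core idea of adding step-size-scaled Gaussian noise to a gradient step so the weights explore the posterior rather than collapsing to a point estimate. It is a generic sketch, not the paper's algorithm, and the names are illustrative.

      # One SGLD step (textbook form): gradient ascent on the log posterior
      # plus Gaussian noise whose variance equals the step size.
      import numpy as np

      def sgld_step(theta, grad_log_posterior, step_size, rng):
          noise = rng.normal(0.0, np.sqrt(step_size), size=theta.shape)
          return theta + 0.5 * step_size * grad_log_posterior(theta) + noise

    Collecting theta across many such steps yields an ensemble of weight samples whose spread serves as the uncertainty estimate, which is the sense in which approaches like the one above learn weight uncertainty rather than a single weight vector.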

  8. Evidence for the activation of sensorimotor information during visual word recognition: the body-object interaction effect.

    PubMed

    Siakaluk, Paul D; Pexman, Penny M; Aguilera, Laura; Owen, William J; Sears, Christopher R

    2008-01-01

    We examined the effects of sensorimotor experience in two visual word recognition tasks. Body-object interaction (BOI) ratings were collected for a large set of words. These ratings assess perceptions of the ease with which a human body can physically interact with a word's referent. A set of high BOI words (e.g., mask) and a set of low BOI words (e.g., ship) were created, matched on imageability and concreteness. Facilitatory BOI effects were observed in lexical decision and phonological lexical decision tasks: responses were faster for high BOI words than for low BOI words. We discuss how our findings may be accounted for by (a) semantic feedback within the visual word recognition system, and (b) an embodied view of cognition (e.g., Barsalou's perceptual symbol systems theory), which proposes that semantic knowledge is grounded in sensorimotor interactions with the environment.

  9. Interference with facial emotion recognition by verbal but not visual loads.

    PubMed

    Reed, Phil; Steed, Ian

    2015-12-01

    The ability to recognize emotions through facial characteristics is critical for social functioning, but is often impaired in those with a developmental or intellectual disability. The current experiments explored the degree to which interfering with the processing capacities of typically-developing individuals would produce a similar inability to recognize emotions through the facial elements of faces displaying particular emotions. It was found that increasing the cognitive load (in an attempt to model learning impairments in a typically developing population) produced deficits in correctly identifying emotions from facial elements. However, this effect was much more pronounced when using a concurrent verbal task than when employing a concurrent visual task, suggesting that there is a substantial verbal element to the labeling and subsequent recognition of emotions. This concurs with previous work conducted with those with developmental disabilities that suggests emotion recognition deficits are connected with language deficits. Copyright © 2015 Elsevier Ltd. All rights reserved.

  10. Hemispheric asymmetries of a motor memory in a recognition test after learning a movement sequence.

    PubMed

    Leinen, Peter; Panzer, Stefan; Shea, Charles H

    2016-11-01

    Two experiments utilizing a spatial-temporal movement sequence were designed to determine if the memory of the sequence is lateralized in the left or right hemisphere. In Experiment 1, dominant right-handers were randomly assigned to one of two acquisition groups: a left-hand starter and a right-hand starter group. After an acquisition phase, reaction time (RT) was measured in a recognition test by providing the learned sequential pattern in the left or right visual half-field for 150ms. In a retention test and two transfer tests the dominant coordinate system for sequence production was evaluated. In Experiment 2 dominant left-handers and dominant right-handers had to acquire the sequence with their dominant limb. The results of Experiment 1 indicated that RT was significantly shorter when the acquired sequence was provided in the right visual field during the recognition test. The same results occurred in Experiment 2 for dominant right-handers and left-handers. These results indicated a right visual field left hemisphere advantage in the recognition test for the practiced stimulus for dominant left and right-handers, when the task was practiced with the dominant limb. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Accent modulates access to word meaning: Evidence for a speaker-model account of spoken word recognition.

    PubMed

    Cai, Zhenguang G; Gilbert, Rebecca A; Davis, Matthew H; Gaskell, M Gareth; Farrar, Lauren; Adler, Sarah; Rodd, Jennifer M

    2017-11-01

    Speech carries accent information relevant to determining the speaker's linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1-3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of "bonnet") in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker's dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  12. Associations between feeling and judging the emotions of happiness and fear: findings from a large-scale field experiment.

    PubMed

    Buchanan, Tony W; Bibas, David; Adolphs, Ralph

    2010-05-14

    How do we recognize emotions from other people? One possibility is that our own emotional experiences guide us in the online recognition of emotion in others. A distinct but related possibility is that emotion experience helps us to learn how to recognize emotions in childhood. We explored these ideas in a large sample of people (N = 4,608) ranging from 5 to over 50 years old. Participants were asked to rate the intensity of emotional experience in their own lives, as well as to perform a task of facial emotion recognition. Those who reported more intense experience of fear and happiness were significantly more accurate (closer to prototypical) in recognizing facial expressions of fear and happiness, respectively, and intense experience of fear was associated also with more accurate recognition of surprised and happy facial expressions. The associations held across all age groups. These results suggest that the intensity of one's own emotional experience of fear and happiness correlates with the ability to recognize these emotions in others, and demonstrate such an association as early as age 5.

  13. How similar are recognition memory and inductive reasoning?

    PubMed

    Hayes, Brett K; Heit, Evan

    2013-07-01

    Conventionally, memory and reasoning are seen as different types of cognitive activities driven by different processes. In two experiments, we challenged this view by examining the relationship between recognition memory and inductive reasoning involving multiple forms of similarity. A common study set (members of a conjunctive category) was followed by a test set containing old and new category members, as well as items that matched the study set on only one dimension. The study and test sets were presented under recognition or induction instructions. In Experiments 1 and 2, the inductive property being generalized was varied in order to direct attention to different dimensions of similarity. When there was no time pressure on decisions, patterns of positive responding were strongly affected by property type, indicating that different types of similarity were driving recognition and induction. By comparison, speeded judgments showed weaker property effects and could be explained by generalization based on overall similarity. An exemplar model, GEN-EX (GENeralization from EXamples), could account for both the induction and recognition data. These findings show that induction and recognition share core component processes, even when the tasks involve flexible forms of similarity.

  14. How to say no: single- and dual-process theories of short-term recognition tested on negative probes.

    PubMed

    Oberauer, Klaus

    2008-05-01

    Three experiments with short-term recognition tasks are reported. In Experiments 1 and 2, participants decided whether a probe matched a list item specified by its spatial location. Items presented at study in a different location (intrusion probes) had to be rejected. Serial position curves of positive, new, and intrusion probes over the probed location's position were mostly parallel. Serial position curves of intrusion probes over their position of origin were again parallel to those of positive probes. Experiment 3 showed largely parallel serial position effects for positive probes and for intrusion probes plotted over positions in a relevant and an irrelevant list, respectively. The results support a dual-process theory in which recognition is based on familiarity and recollection, and recollection uses 2 retrieval routes, from context to item and from item to context.

  15. Strength-based criterion shifts in recognition memory.

    PubMed

    Singer, Murray

    2009-10-01

    In manipulations of stimulus strength between lists, a more lenient signal detection criterion is more frequently applied to a weak than to a strong stimulus class. However, with randomly intermixed weak and strong test probes, such a criterion shift often does not result. A procedure that has yielded delay-based within-list criterion shifts was applied to strength manipulations in recognition memory for categorized word lists. When participants made semantic ratings about each stimulus word, strength-based criterion shifts emerged regardless of whether words from pairs of categories were studied in separate blocks (Experiment 1) or in intermixed blocks (Experiment 2). In Experiment 3, the criterion shift persisted under the semantic-rating study task, but not under rote memorization. These findings suggest that continually adjusting the recognition decision criterion is cognitively feasible. They provide a technique for manipulating the criterion shift, and they identify competing theoretical accounts of these effects.

  16. Dentate gyrus supports slope recognition memory, shades of grey-context pattern separation and recognition memory, and CA3 supports pattern completion for object memory.

    PubMed

    Kesner, Raymond P; Kirk, Ryan A; Yu, Zhenghui; Polansky, Caitlin; Musso, Nick D

    2016-03-01

    In order to examine the role of the dorsal dentate gyrus (dDG) in slope (vertical space) recognition and possible pattern separation, various slope (vertical space) degrees were used in a novel exploratory paradigm to measure novelty detection for changes in slope (vertical space) recognition memory and slope memory pattern separation in Experiment 1. The results of the experiment indicate that control rats displayed a slope recognition memory function with a pattern separation process for slope memory that is dependent upon the magnitude of change in slope between study and test phases. In contrast, the dDG lesioned rats displayed an impairment in slope recognition memory, though because there was no significant interaction between the two groups and slope memory, a reliable pattern separation impairment for slope could not be firmly established in the DG lesioned rats. In Experiment 2, in order to determine whether the dDG plays a role in shades of grey spatial context recognition and possible pattern separation, shades of grey were used in a novel exploratory paradigm to measure novelty detection for changes in the shades of grey context environment. The results of the experiment indicate that control rats displayed a shades of grey-context pattern separation effect across levels of separation of context (shades of grey). In contrast, the DG lesioned rats showed a significant interaction between group and levels of shades of grey, suggesting impairment in a pattern separation function for levels of shades of grey. In Experiment 3, in order to determine whether the dorsal CA3 (dCA3) plays a role in object pattern completion, a new task was used that required less training and based the choice on selecting the correct set of objects in a two-choice discrimination task. The results indicated that control rats displayed a pattern completion function based on the availability of one, two, three or four cues. In contrast, the dCA3 lesioned rats showed a significant interaction between group and the number of available objects, suggesting impairment in a pattern completion function for object cues. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. Development of a sonar-based object recognition system

    NASA Astrophysics Data System (ADS)

    Ecemis, Mustafa Ihsan

    2001-02-01

    Sonars are used extensively in mobile robotics for obstacle detection, ranging and avoidance. However, these range-finding applications do not exploit the full range of information carried in sonar echoes. In addition, mobile robots need robust object recognition systems. Therefore, a simple and robust object recognition system using ultrasonic sensors may have a wide range of applications in robotics. This dissertation develops and analyzes an object recognition system that uses ultrasonic sensors of the type commonly found on mobile robots. Three principal experiments are used to test the sonar recognition system: object recognition at various distances, object recognition during unconstrained motion, and softness discrimination. The hardware setup, consisting of an inexpensive Polaroid sonar and a data acquisition board, is described first. The software for ultrasound signal generation, echo detection, data collection, and data processing is then presented. Next, the dissertation describes two methods to extract information from the echoes, one in the frequency domain and the other in the time domain. The system uses the fuzzy ARTMAP neural network to recognize objects on the basis of the information content of their echoes. In order to demonstrate that the performance of the system does not depend on the specific classification method being used, the K-Nearest Neighbors (KNN) algorithm is also implemented. KNN yields a test accuracy similar to fuzzy ARTMAP in all experiments. Finally, the dissertation describes a method for extracting features from the envelope function in order to reduce the dimension of the input vector used by the classifiers. Decreasing the size of the input vectors reduces the memory requirements of the system and makes it run faster. It is shown that this method does not affect the performance of the system dramatically and is more appropriate for some tasks. The results of these experiments demonstrate that sonar can be used to develop a low-cost, low-computation system for real-time object recognition tasks on mobile robots. This system differs from all previous approaches in that it is relatively simple, robust, fast, and inexpensive.
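
    To make the classifier-agnostic point concrete, a distance-based rule of the kind mentioned (KNN) needs nothing beyond a fixed-length feature vector per echo. The sketch below assumes the envelope features have already been extracted; the arrays are placeholders rather than the dissertation's data.

    ```python
    # Minimal k-nearest-neighbours sketch over pre-computed echo features.
    # Feature extraction from the sonar envelope is assumed to have been done
    # elsewhere; the arrays below are placeholders, not the dissertation's data.
    import numpy as np

    def knn_predict(train_x, train_y, query, k=3):
        """Classify one echo feature vector by majority vote of its k nearest neighbours."""
        dists = np.linalg.norm(train_x - query, axis=1)      # Euclidean distance to each stored echo
        nearest = np.argsort(dists)[:k]                      # indices of the k closest echoes
        labels, counts = np.unique(train_y[nearest], return_counts=True)
        return labels[np.argmax(counts)]

    # Toy usage with random placeholder features (5-dimensional envelope features).
    rng = np.random.default_rng(0)
    train_x = rng.normal(size=(60, 5))
    train_y = rng.integers(0, 3, size=60)                    # three hypothetical object classes
    print(knn_predict(train_x, train_y, query=rng.normal(size=5)))
    ```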

  18. Selective attention and recognition: effects of congruency on episodic learning.

    PubMed

    Rosner, Tamara M; D'Angelo, Maria C; MacLellan, Ellen; Milliken, Bruce

    2015-05-01

    Recent research on cognitive control has focused on the learning consequences of high selective attention demands in selective attention tasks (e.g., Botvinick, Cognit Affect Behav Neurosci 7(4):356-366, 2007; Verguts and Notebaert, Psychol Rev 115(2):518-525, 2008). The current study extends these ideas by examining the influence of selective attention demands on remembering. In Experiment 1, participants read aloud the red word in a pair of red and green spatially interleaved words. Half of the items were congruent (the interleaved words had the same identity), and the other half were incongruent (the interleaved words had different identities). Following the naming phase, participants completed a surprise recognition memory test. In this test phase, recognition memory was better for incongruent than for congruent items. In Experiment 2, context was only partially reinstated at test, and again recognition memory was better for incongruent than for congruent items. In Experiment 3, all of the items contained two different words, but in one condition the words were presented close together and interleaved, while in the other condition the two words were spatially separated. Recognition memory was better for the interleaved than for the separated items. This result rules out an interpretation of the congruency effects on recognition in Experiments 1 and 2 that hinges on stronger relational encoding for items that have two different words. Together, the results support the view that selective attention demands for incongruent items lead to encoding that improves recognition.

  19. Collaboration in associative recognition memory: using recalled information to defend "new" judgments.

    PubMed

    Clark, Steven E; Abbe, Allison; Larson, Rakel P

    2006-11-01

    S. E. Clark, A. Hori, A. Putnam, and T. J. Martin (2000) showed that collaboration on a recognition memory task produced facilitation in the recognition of targets but had inconsistent and sometimes negative effects for distractors. They accounted for these results within the framework of a dual-process, recall-plus-familiarity model but offered only weak evidence to support it. The results of the 3 experiments reported here provide stronger evidence for Clark et al.'s dual-process view and also show why such evidence is difficult to obtain. Copyright 2006 APA, all rights reserved.

  20. Learned Non-Rigid Object Motion is a View-Invariant Cue to Recognizing Novel Objects

    PubMed Central

    Chuang, Lewis L.; Vuong, Quoc C.; Bülthoff, Heinrich H.

    2012-01-01

    There is evidence that observers use learned object motion to recognize objects. For instance, studies have shown that reversing the learned direction in which a rigid object rotated in depth impaired recognition accuracy. This motion reversal can be achieved by playing animation sequences of moving objects in reverse frame order. In the current study, we used this sequence-reversal manipulation to investigate whether observers encode the motion of dynamic objects in visual memory, and whether such dynamic representations are encoded in a way that is dependent on the viewing conditions. Participants first learned dynamic novel objects, presented as animation sequences. Following learning, they were then tested on their ability to recognize these learned objects when their animation sequence was shown in the same sequence order as during learning or in the reverse sequence order. In Experiment 1, we found that non-rigid motion contributed to recognition performance; that is, sequence-reversal decreased sensitivity across different tasks. In subsequent experiments, we tested the recognition of non-rigidly deforming (Experiment 2) and rigidly rotating (Experiment 3) objects across novel viewpoints. Recognition performance was affected by viewpoint changes for both experiments. Learned non-rigid motion continued to contribute to recognition performance and this benefit was the same across all viewpoint changes. By comparison, learned rigid motion did not contribute to recognition performance. These results suggest that non-rigid motion provides a source of information for recognizing dynamic objects, which is not affected by changes to viewpoint. PMID:22661939

  1. Emotion recognition in Parkinson's disease: Static and dynamic factors.

    PubMed

    Wasser, Cory I; Evans, Felicity; Kempnich, Clare; Glikmann-Johnston, Yifat; Andrews, Sophie C; Thyagarajan, Dominic; Stout, Julie C

    2018-02-01

    The authors tested the hypothesis that Parkinson's disease (PD) participants would perform better in an emotion recognition task with dynamic (video) stimuli than in a task using only static (photograph) stimuli, and compared performance on both tasks with that of healthy control participants. In a within-subjects study, 21 PD participants and 20 age-matched healthy controls performed both static and dynamic emotion recognition tasks. The authors used a 2-way analysis of variance (controlling for individual participant variance) to determine the effect of group (PD, control) on emotion recognition performance in the static and dynamic facial recognition tasks. Groups did not differ significantly in their performance on the static and dynamic tasks; however, a trend suggested that PD participants performed worse than controls. PD participants may have subtle emotion recognition deficits that are not ameliorated by the addition of contextual cues similar to those found in everyday scenarios. Consistent with previous literature, the results suggest that PD participants may have underlying emotion recognition deficits, which may impact their social functioning. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  2. Emotion recognition training using composite faces generalises across identities but not all emotions.

    PubMed

    Dalili, Michael N; Schofield-Toloza, Lawrence; Munafò, Marcus R; Penton-Voak, Ian S

    2017-08-01

    Many cognitive bias modification (CBM) tasks use facial expressions of emotion as stimuli. Some tasks use unique facial stimuli, while others use composite stimuli, given evidence that emotion is encoded prototypically. However, CBM using composite stimuli may be identity- or emotion-specific, and may not generalise to other stimuli. We investigated the generalisability of effects using composite faces in two experiments. Healthy adults in each study were randomised to one of four training conditions: two stimulus-congruent conditions, where the same faces were used during all phases of the task, and two stimulus-incongruent conditions, where faces of the opposite sex (Experiment 1) or faces depicting another emotion (Experiment 2) were used after the modification phase. Our results suggested that training effects generalised across identities. However, our results indicated only partial generalisation across emotions. These findings suggest that effects obtained using composite stimuli may extend beyond the stimuli used in the task but remain emotion-specific.

  3. Verbal predicates foster conscious recollection but not familiarity of a task-irrelevant perceptual feature--an ERP study.

    PubMed

    Ecker, Ullrich K H; Arend, Anna M; Bergström, Kirstin; Zimmer, Hubert D

    2009-09-01

    Research on the effects of perceptual manipulations on recognition memory has suggested that (a) recollection is selectively influenced by task-relevant information and (b) familiarity can be considered perceptually specific. The present experiment tested divergent assumptions that (a) perceptual features can influence conscious object recollection via verbal code despite being task-irrelevant and that (b) perceptual features do not influence object familiarity if study is verbal-conceptual. At study, subjects named objects and their presentation colour; this was followed by an old/new object recognition test. Event-related potentials (ERP) showed that a study-test manipulation of colour impacted selectively on the ERP effect associated with recollection, while a size manipulation showed no effect. It is concluded that (a) verbal predicates generated at study are potent episodic memory agents that modulate recollection even if the recovered feature information is task-irrelevant and (b) commonly found perceptual match effects on familiarity critically depend on perceptual processing at study.

  4. Providing information about diagnostic features at retrieval reduces false recognition.

    PubMed

    Lane, Sean M; Roussel, Cristine C; Starns, Jeffrey J; Villa, Diane; Alonzo, Jill D

    2008-11-01

    In the following study, participants encoded blocked DRM word lists and we varied whether they received information before test about the utility of mnemonic features that potentially discriminate between veridical and false memories. The results of three experiments revealed that this manipulation successfully reduced false recognition of critical theme words. We also found that this manipulation was effective for younger but not older adults. Furthermore, calling attention to the features in test instructions alone was sufficient for reducing false recognition and its effectiveness was not enhanced by also asking participants to rate their phenomenal experience. We argue that providing diagnostic information before test allows participants to establish more accurate expectations about the task and thus improves the efficacy of retrieval and monitoring processes that are subsequently engaged.

  5. Bilevel Model-Based Discriminative Dictionary Learning for Recognition.

    PubMed

    Zhou, Pan; Zhang, Chao; Lin, Zhouchen

    2017-03-01

    Most supervised dictionary learning methods optimize a combination of reconstruction error, a sparsity prior, and discriminative terms. Thus, the learnt dictionaries may not be optimal for recognition tasks. Also, the sparse-code learning models used in the training and testing phases are inconsistent. Besides, without utilizing the intrinsic data structure, many dictionary learning methods only employ the ℓ0 or ℓ1 norm to encode each datum independently, limiting the performance of the learnt dictionaries. We present a novel bilevel model-based discriminative dictionary learning method for recognition tasks. The upper level directly minimizes the classification error, while the lower level uses the sparsity term and the Laplacian term to characterize the intrinsic data structure. The lower level is subordinate to the upper level. Therefore, our model achieves an overall optimality for recognition in that the learnt dictionary is directly tailored for recognition. Moreover, the sparse-code learning models in the training and testing phases can be the same. We further propose a novel method to solve our bilevel optimization problem. It first replaces the lower level with its Karush-Kuhn-Tucker conditions and then applies the alternating direction method of multipliers to solve the equivalent problem. Extensive experiments demonstrate the effectiveness and robustness of our method.
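
    As a rough illustration of the bilevel structure (not the paper's exact formulation), the lower-level sparse coding step can be written as an ISTA iteration over a fixed dictionary; the upper level would then adjust the dictionary and classifier to reduce classification error. The Laplacian term and the KKT/ADMM solver described above are omitted here.

    ```python
    # Simplified sketch of the lower level only: sparse-code one sample over a
    # fixed dictionary D by ISTA. The upper level (updating D and the classifier
    # to minimize classification error) is not shown.
    import numpy as np

    def ista_sparse_code(D, x, lam=0.1, n_iter=100):
        """Lower level: solve min_a 0.5*||x - D a||^2 + lam*||a||_1 by ISTA."""
        step = 1.0 / np.linalg.norm(D, 2) ** 2          # 1 / Lipschitz constant of the gradient
        a = np.zeros(D.shape[1])
        for _ in range(n_iter):
            grad = D.T @ (D @ a - x)
            z = a - step * grad
            a = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-thresholding
        return a

    # Toy usage: code one random sample over a random 20-atom dictionary.
    rng = np.random.default_rng(1)
    D = rng.normal(size=(10, 20))
    D /= np.linalg.norm(D, axis=0)                      # unit-norm atoms, a common convention
    x = rng.normal(size=10)
    codes = ista_sparse_code(D, x)
    print(np.count_nonzero(codes), "non-zero coefficients")
    ```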

  6. Ongoing slow oscillatory phase modulates speech intelligibility in cooperation with motor cortical activity.

    PubMed

    Onojima, Takayuki; Kitajo, Keiichi; Mizuhara, Hiroaki

    2017-01-01

    Neural oscillation is attracting attention as an underlying mechanism for speech recognition. Speech intelligibility is enhanced by the synchronization of speech rhythms with slow neural oscillations, which are typically observed in human scalp electroencephalography (EEG). In addition to the effect of neural oscillation, it has been proposed that speech recognition is enhanced by the identification of a speaker's motor signals, which are used for speech production. To verify the relationship between the effect of neural oscillation and motor cortical activity, we measured scalp EEG, and simultaneous EEG and functional magnetic resonance imaging (fMRI), during a speech recognition task in which participants were required to recognize spoken words embedded in noise. We proposed an index to quantitatively evaluate the EEG phase effect on behavioral performance. The results showed that the delta and theta EEG phase before speech input modulated the participants' response times in the speech recognition task. The simultaneous EEG-fMRI experiment showed that slow EEG activity was correlated with motor cortical activity. These results suggested that the effect of the slow oscillatory phase was associated with the activity of the motor cortex during speech recognition.

  7. Locality constrained joint dynamic sparse representation for local matching based face recognition.

    PubMed

    Wang, Jianzhong; Yi, Yugen; Zhou, Wei; Shi, Yanjiao; Qi, Miao; Zhang, Ming; Zhang, Baoxue; Kong, Jun

    2014-01-01

    Recently, Sparse Representation-based Classification (SRC) has attracted a lot of attention for its applications to various tasks, especially in biometric techniques such as face recognition. However, factors such as lighting, expression, pose and disguise variations in face images degrade the performance of SRC and of most other face recognition techniques. In order to overcome these limitations, we propose a robust face recognition method named Locality Constrained Joint Dynamic Sparse Representation-based Classification (LCJDSRC) in this paper. In our method, a face image is first partitioned into several smaller sub-images. Then, these sub-images are sparsely represented using the proposed locality constrained joint dynamic sparse representation algorithm. Finally, the representation results for all sub-images are aggregated to obtain the final recognition result. Compared with other algorithms that process each sub-image of a face image independently, the proposed algorithm regards local matching-based face recognition as a multi-task learning problem. Thus, the latent relationships among the sub-images from the same face image are taken into account. Meanwhile, the locality information of the data is also considered in our algorithm. We evaluate our algorithm by comparing it with other state-of-the-art approaches. Extensive experiments on four benchmark face databases (ORL, Extended YaleB, AR and LFW) demonstrate the effectiveness of LCJDSRC.
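
    The local-matching pipeline (partition, per-block scoring, aggregation) can be sketched independently of the specific sparse representation used. The sketch below substitutes plain per-class least-squares residuals for the locality-constrained joint dynamic sparse representation, so it only illustrates the sub-image aggregation idea, not the paper's algorithm.

    ```python
    # Hedged sketch: partition a face into sub-images, score each sub-image
    # against every class, then aggregate the scores. Per-class least-squares
    # residuals stand in for the paper's sparse representation.
    import numpy as np

    def partition(img, blocks=(2, 2)):
        """Split an image into blocks[0] x blocks[1] sub-images, flattened to vectors."""
        rows = np.array_split(img, blocks[0], axis=0)
        return [b.ravel() for r in rows for b in np.array_split(r, blocks[1], axis=1)]

    def classify(img, class_templates, blocks=(2, 2)):
        """class_templates: {label: list of training images}. Returns the label
        with the smallest total reconstruction residual summed over sub-images."""
        sub_imgs = partition(img, blocks)
        scores = {}
        for label, imgs in class_templates.items():
            total = 0.0
            for i, sub in enumerate(sub_imgs):
                A = np.stack([partition(t, blocks)[i] for t in imgs], axis=1)  # class dictionary for this block
                coef, *_ = np.linalg.lstsq(A, sub, rcond=None)
                total += np.linalg.norm(sub - A @ coef)                        # residual for this sub-image
            scores[label] = total
        return min(scores, key=scores.get)

    # Toy usage with random 16x16 "face" images for two hypothetical classes.
    rng = np.random.default_rng(2)
    templates = {c: [rng.normal(size=(16, 16)) for _ in range(3)] for c in ("A", "B")}
    print(classify(rng.normal(size=(16, 16)), templates))
    ```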

  8. Handwritten digits recognition based on immune network

    NASA Astrophysics Data System (ADS)

    Li, Yangyang; Wu, Yunhui; Jiao, Lc; Wu, Jianshe

    2011-11-01

    With the development of society, handwritten digit recognition technology has been widely applied in production and daily life, yet it remains a difficult problem in the field of pattern recognition. In this paper, a new method is presented for handwritten digit recognition. The digit samples are first preprocessed and their features extracted. Based on these features, a novel immune network classification algorithm is designed and implemented for handwritten digit recognition. The proposed algorithm builds on Jerne's immune network model for feature selection and the KNN method for classification, and its distinguishing characteristic is a novel network with parallel computing and learning. The performance of the proposed method is evaluated on the MNIST handwritten digit dataset and compared with other recognition algorithms: KNN, ANN and SVM. The results show that the novel classification algorithm based on the immune network gives promising performance and stable behavior for handwritten digit recognition.
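
    The KNN baseline used in the comparison is easy to reproduce; the sketch below runs it on scikit-learn's built-in 8x8 digits dataset as a lightweight stand-in for MNIST, and is not the immune network classifier itself.

    ```python
    # Sketch of the KNN baseline on scikit-learn's small digits dataset,
    # used here as a stand-in for MNIST.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=3)
    knn.fit(X_train, y_train)
    print(f"KNN test accuracy: {knn.score(X_test, y_test):.3f}")
    ```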

  9. Family environment influences emotion recognition following paediatric traumatic brain injury.

    PubMed

    Schmidt, Adam T; Orsten, Kimberley D; Hanten, Gerri R; Li, Xiaoqi; Levin, Harvey S

    2010-01-01

    This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Findings suggest family functioning variables--especially financial resources--can influence performance on an emotional processing task following TBI in children.

  10. Hierarchical singleton-type recurrent neural fuzzy networks for noisy speech recognition.

    PubMed

    Juang, Chia-Feng; Chiou, Chyi-Tian; Lai, Chun-Lung

    2007-05-01

    This paper proposes noisy speech recognition using hierarchical singleton-type recurrent neural fuzzy networks (HSRNFNs). The proposed HSRNFN is a hierarchical connection of two singleton-type recurrent neural fuzzy networks (SRNFNs), where one is used for noise filtering and the other for recognition. The SRNFN is constructed from recurrent fuzzy if-then rules with fuzzy singletons in the consequences, and its recurrent properties make it suitable for processing speech patterns with temporal characteristics. For recognition of n words, n SRNFNs are created, one modeling each word; each SRNFN receives the current frame feature and predicts the next frame of the word it models. The prediction error of each SRNFN is used as the recognition criterion. For filtering, one SRNFN is created, and each SRNFN recognizer is connected to the same SRNFN filter, which filters noisy speech patterns in the feature domain before feeding them to the SRNFN recognizer. Experiments on Mandarin word recognition under different types of noise are performed. Other recognizers, including multilayer perceptrons (MLPs), time-delay neural networks (TDNNs), and hidden Markov models (HMMs), are also tested and compared. These experiments and comparisons demonstrate good results with HSRNFN for noisy speech recognition tasks.
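
    The prediction-error decision rule generalises beyond the specific network used. In the sketch below, simple least-squares linear predictors stand in for the SRNFNs: one predictor per word is trained to map each frame to the next, and an utterance is assigned to the word whose predictor accumulates the smallest error. The feature frames are random placeholders.

    ```python
    # Sketch of prediction-error-based word recognition with linear predictors
    # standing in for the recurrent neural fuzzy networks of the paper.
    import numpy as np

    def fit_predictor(frames):
        """Fit W so that frames[t+1] ~= frames[t] @ W (least squares)."""
        X, Y = frames[:-1], frames[1:]
        W, *_ = np.linalg.lstsq(X, Y, rcond=None)
        return W

    def prediction_error(W, frames):
        """Accumulated squared error of predicting each next frame."""
        return float(np.sum((frames[:-1] @ W - frames[1:]) ** 2))

    def recognize(frames, word_models):
        """Assign the utterance to the word whose predictor fits it best."""
        return min(word_models, key=lambda w: prediction_error(word_models[w], frames))

    # Toy usage with random "feature frames" (rows are frames).
    rng = np.random.default_rng(3)
    train = {w: rng.normal(size=(50, 12)) for w in ("yes", "no")}
    word_models = {w: fit_predictor(f) for w, f in train.items()}
    print(recognize(rng.normal(size=(20, 12)), word_models))
    ```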

  11. Effects of exposure to facial expression variation in face learning and recognition.

    PubMed

    Liu, Chang Hong; Chen, Wenfeng; Ward, James

    2015-11-01

    Facial expression is a major source of image variation in face images. Linking numerous expressions to the same face can be a huge challenge for face learning and recognition. It remains largely unknown what level of exposure to this image variation is critical for expression-invariant face recognition. We examined this issue in a recognition memory task, where the number of facial expressions of each face being exposed during a training session was manipulated. Faces were either trained with multiple expressions or a single expression, and they were later tested in either the same or different expressions. We found that recognition performance after learning three emotional expressions had no improvement over learning a single emotional expression (Experiments 1 and 2). However, learning three emotional expressions improved recognition compared to learning a single neutral expression (Experiment 3). These findings reveal both the limitation and the benefit of multiple exposures to variations of emotional expression in achieving expression-invariant face recognition. The transfer of expression training to a new type of expression is likely to depend on a relatively extensive level of training and a certain degree of variation across the types of expressions.

  12. Adaptive false memory: Imagining future scenarios increases false memories in the DRM paradigm.

    PubMed

    Dewhurst, Stephen A; Anderson, Rachel J; Grace, Lydia; van Esch, Lotte

    2016-10-01

    Previous research has shown that rating words for their relevance to a future scenario enhances memory for those words. The current study investigated the effect of future thinking on false memory using the Deese/Roediger-McDermott (DRM) procedure. In Experiment 1, participants rated words from 6 DRM lists for relevance to a past or future event (with or without planning) or in terms of pleasantness. In a surprise recall test, levels of correct recall did not vary between the rating tasks, but the future rating conditions led to significantly higher levels of false recall than the past and pleasantness conditions did. Experiment 2 found that future rating led to higher levels of false recognition than did past and pleasantness ratings but did not affect correct recognition. The effect in false recognition was, however, eliminated when DRM items were presented in random order. Participants in Experiment 3 were presented with both DRM lists and lists of unrelated words. Future rating increased levels of false recognition for DRM lures but did not affect correct recognition for DRM or unrelated lists. The findings are discussed in terms of the view that false memories can be associated with adaptive memory functions.

  13. Affective theory of mind inferences contextually influence the recognition of emotional facial expressions.

    PubMed

    Stewart, Suzanne L K; Schepman, Astrid; Haigh, Matthew; McHugh, Rhian; Stewart, Andrew J

    2018-03-14

    The recognition of emotional facial expressions is often subject to contextual influence, particularly when the face and the context convey similar emotions. We investigated whether spontaneous, incidental affective theory of mind inferences made while reading vignettes describing social situations would produce context effects on the identification of same-valenced emotions (Experiment 1) as well as differently-valenced emotions (Experiment 2) conveyed by subsequently presented faces. Crucially, we found an effect of context on reaction times in both experiments while, in line with previous work, we found evidence for a context effect on accuracy only in Experiment 1. This demonstrates that affective theory of mind inferences made at the pragmatic level of a text can automatically, contextually influence the perceptual processing of emotional facial expressions in a separate task even when those emotions are of a distinctive valence. Thus, our novel findings suggest that language acts as a contextual influence to the recognition of emotional facial expressions for both same and different valences.

  14. The short and long term effects of docetaxel chemotherapy on rodent object recognition and spatial reference memory.

    PubMed

    Fardell, Joanna E; Vardy, Janette; Johnston, Ian N

    2013-10-17

    Previous animal studies have examined the potential for cytostatic drugs to induce learning and memory deficits in laboratory animals but, to date, there is no pre-clinical evidence that taxanes have the potential to cause cognitive impairment. Therefore our aim was to explore the short- and long-term cognitive effects of different dosing schedules of the taxane docetaxel (DTX) on laboratory rodents. Healthy male hooded Wistar rats were treated with DTX (6 mg/kg or 10 mg/kg) or physiological saline (control), once a week for 3 weeks (Experiment 1) or once only (10 mg/kg; Experiment 2). Cognitive function was assessed using the novel object recognition (NOR) task and the spatial water maze (WM) task 1 to 3 weeks after treatment and again 4 months after treatment. Shortly after DTX treatment, rats performed poorly on NOR regardless of treatment regimen. Treatment with a single injection of 10 mg/kg DTX did not appear to induce sustained deficits in object recognition or peripheral neuropathy. Overall these findings show that treatment with the taxane DTX, in the absence of cancer and other anti-cancer treatments, causes cognitive impairment in healthy rodents. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. Speech variability effects on recognition accuracy associated with concurrent task performance by pilots

    NASA Technical Reports Server (NTRS)

    Simpson, C. A.

    1985-01-01

    In the present study of the responses of pairs of pilots to aircraft warning classification tasks using an isolated word, speaker-dependent speech recognition system, the induced stress was manipulated by means of different scoring procedures for the classification task and by the inclusion of a competitive manual control task. Both speech patterns and recognition accuracy were analyzed, and recognition errors were recorded by type for an isolated word speaker-dependent system and by an offline technique for a connected word speaker-dependent system. While errors increased with task loading for the isolated word system, there was no such effect for task loading in the case of the connected word system.

  16. A model of traffic signs recognition with convolutional neural network

    NASA Astrophysics Data System (ADS)

    Hu, Haihe; Li, Yujian; Zhang, Ting; Huo, Yi; Kuang, Wenqing

    2016-10-01

    In real traffic scenes, the quality of captured images is generally low due to factors such as lighting conditions and occlusion. All of these factors make automated recognition of traffic signs challenging. Deep learning has recently provided a new way to solve this kind of problem. A deep network can automatically learn features from a large number of data samples and obtain an excellent recognition performance. We therefore approach the task of traffic sign recognition as a general vision problem, with few assumptions specific to road signs. We propose a Convolutional Neural Network (CNN) model and apply it to the task of traffic sign recognition. The proposed model adopts a deep CNN as the supervised learning model, directly takes the collected traffic sign images as input, alternates convolutional and subsampling layers, and automatically extracts the features for recognizing the traffic sign images. The proposed model includes an input layer, three convolutional layers, three subsampling layers, a fully-connected layer, and an output layer. To validate the proposed model, experiments are carried out on the public dataset of the China competition on fuzzy image processing. Experimental results show that the proposed model achieves a recognition accuracy of 99.01% on the training dataset and 92% in the preliminary contest, placing among the four best entries.
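
    A minimal PyTorch rendering of the layer sequence described (input, three convolution plus subsampling stages, one fully-connected layer, an output layer) is given below. The channel counts, kernel sizes, 48x48 input resolution, and 43-class output are illustrative assumptions, not values taken from the paper.

    ```python
    # Sketch of the described CNN layout in PyTorch; hyperparameters are assumptions.
    import torch
    import torch.nn as nn

    class TrafficSignCNN(nn.Module):
        def __init__(self, n_classes=43):          # 43 classes is typical for traffic-sign sets; an assumption here
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),   # conv + subsampling 1
                nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),  # conv + subsampling 2
                nn.Conv2d(32, 64, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),  # conv + subsampling 3
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 3 * 3, 128), nn.ReLU(),   # fully-connected layer (3x3 maps remain from 48x48 input)
                nn.Linear(128, n_classes),               # output layer
            )

        def forward(self, x):
            return self.classifier(self.features(x))

    # One forward pass on a dummy batch of 48x48 RGB images.
    model = TrafficSignCNN()
    print(model(torch.randn(2, 3, 48, 48)).shape)    # -> torch.Size([2, 43])
    ```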

  17. Recognition of own-race and other-race faces by three-month-old infants.

    PubMed

    Sangrigoli, Sandy; De Schonen, Scania

    2004-10-01

    People are better at recognizing faces of their own race than faces of another race. Such race specificity may be due to differential expertise in the two races. In order to find out whether this other-race effect develops as early as face-recognition skills or whether it is a long-term effect of acquired expertise, we tested face recognition in 3-month-old Caucasian infants by conducting two experiments using Caucasian and Asiatic faces and a visual pair-comparison task. We hypothesized that if the other race effect develops together with face processing skills during the first months of life, the ability to recognize own-race faces will be greater than the ability to recognize other-race faces: 3-month-old Caucasian infants should be better at recognizing Caucasian faces than Asiatic faces. If, on the contrary, the other-race effect is the long-term result of acquired expertise, no difference between recognizing own- and other-race faces will be observed at that age. In Experiment 1, Caucasian infants were habituated to a single face. Recognition was assessed by a novelty preference paradigm. The infants' recognition performance was better for Caucasian than for Asiatic faces. In Experiment 2, Caucasian infants were familiarized with three individual faces. Recognition was demonstrated with both Caucasian and Asiatic faces. These results suggest that (i) the representation of face information by 3-month-olds may be race-experience-dependent (Experiment 1), and (ii) short-term familiarization with exemplars of another race group is sufficient to reduce the other-race effect and to extend the power of face processing (Experiment 2).

  18. Word-to-picture recognition is a function of motor components mappings at the stage of retrieval.

    PubMed

    Brouillet, Denis; Brouillet, Thibaut; Milhau, Audrey; Heurley, Loïc; Vagnot, Caroline; Brunel, Lionel

    2016-10-01

    Embodied approaches to cognition argue that retrieval involves the re-enactment of both the sensory and the motor components of the event to be remembered. In this study, we investigated the effect of the motor action performed to produce the response in a recognition task when this action is compatible with the affordance of the objects to be recognised. In our experiment, participants were first asked to learn a list of words referring to graspable objects, and then to make recognition judgements on pictures. The pictures represented objects whose graspable part pointed either to the same side as the "Yes" response key or to the opposite side. Results show a robust effect of compatibility between object affordance and response hand. Moreover, this compatibility improves participants' discrimination ability, suggesting that motor components are a relevant cue for memory judgement at the stage of retrieval in a recognition task. More broadly, our data highlight that memory judgements are a function of motor component mappings at the stage of retrieval. © 2015 International Union of Psychological Science.

  19. Verbalizing, Visualizing, and Navigating: The Effect of Strategies on Encoding a Large-Scale Virtual Environment

    PubMed Central

    Kraemer, David J.M.; Schinazi, Victor R.; Cawkwell, Philip B.; Tekriwal, Anand; Epstein, Russell A.; Thompson-Schill, Sharon L.

    2016-01-01

    Using novel virtual cities, we investigated the influence of verbal and visual strategies on the encoding of navigation-relevant information in a large-scale virtual environment. In two experiments, participants watched videos of routes through four virtual cities and were subsequently tested on their memory for observed landmarks and on their ability to make judgments regarding the relative directions of the different landmarks along the route. In the first experiment, self-report questionnaires measuring visual and verbal cognitive styles were administered to examine correlations between cognitive styles, landmark recognition, and judgments of relative direction. Results demonstrate a tradeoff in which the verbal cognitive style is more beneficial for recognizing individual landmarks than for judging relative directions between them, whereas the visual cognitive style is more beneficial for judging relative directions than for landmark recognition. In a second experiment, we manipulated the use of verbal and visual strategies by varying task instructions given to separate groups of participants. Results confirm that a verbal strategy benefits landmark memory, whereas a visual strategy benefits judgments of relative direction. The manipulation of strategy by altering task instructions appears to trump individual differences in cognitive style. Taken together, we find that processing different details during route encoding, whether due to individual proclivities (Experiment 1) or task instructions (Experiment 2), results in benefits for different components of navigation relevant information. These findings also highlight the value of considering multiple sources of individual differences as part of spatial cognition investigations. PMID:27668486

  20. Social cognition in schizophrenia: cognitive and affective factors.

    PubMed

    Ziv, Ido; Leiser, David; Levine, Joseph

    2011-01-01

    Social cognition refers to how people conceive, perceive, and draw inferences about mental and emotional states of others in the social world. Previous studies suggest that the concept of social cognition involves several abilities, including those related to affect and cognition. The present study analyses the deficits of individuals with schizophrenia in two areas of social cognition: Theory of Mind (ToM) and emotion recognition and processing. Examining the impairment of these abilities in patients with schizophrenia has the potential to elucidate the neurophysiological regions involved in social cognition and may also have the potential to aid rehabilitation. Two experiments were conducted. Both included the same five tasks: first- and second-level false-belief ToM tasks, emotion inferencing, understanding of irony, and matrix reasoning (a WAIS-R subtest). The matrix reasoning task was administered to evaluate and control for the association of the other tasks with analytic reasoning skills. Experiment 1 involved factor analysis of the task performance of 75 healthy participants. Experiment 2 compared 30 patients with schizophrenia to an equal number of matched controls. Results. (1) The five tasks were clearly divided into two factors corresponding to the two areas of social cognition, ToM and emotion recognition and processing. (2) Schizophrenics' performance was impaired on all tasks, particularly on those loading heavily on the analytic component (matrix reasoning and second-order ToM). (3) Matrix reasoning, second-level ToM (ToM2), and irony were found to distinguish patients from controls, even when all other tasks that revealed significant impairment in the patients' performance were taken into account. The two areas of social cognition examined are related to distinct factors. The mechanism for answering ToM questions (especially ToM2) depends on analytic reasoning capabilities, but the difficulties they present to individuals with schizophrenia are due to other components as well. The impairment in social cognition in schizophrenia stems from deficiencies in several mechanisms, including the ability to think analytically and to process emotion information and cues.

  1. Oxytocin Reduces Face Processing Time but Leaves Recognition Accuracy and Eye-Gaze Unaffected.

    PubMed

    Hubble, Kelly; Daughters, Katie; Manstead, Antony S R; Rees, Aled; Thapar, Anita; van Goozen, Stephanie H M

    2017-01-01

    Previous studies have found that oxytocin (OXT) can improve the recognition of emotional facial expressions; it has been proposed that this effect is mediated by an increase in attention to the eye-region of faces. Nevertheless, evidence in support of this claim is inconsistent, and few studies have directly tested the effect of oxytocin on emotion recognition via altered eye-gaze. Methods: In a double-blind, within-subjects, randomized control experiment, 40 healthy male participants received 24 IU intranasal OXT and placebo in two identical experimental sessions separated by a 2-week interval. Visual attention to the eye-region was assessed on both occasions while participants completed a static facial emotion recognition task using medium intensity facial expressions. Although OXT had no effect on emotion recognition accuracy, recognition performance was improved because face processing was faster across emotions under the influence of OXT. This effect was marginally significant (p<.06). Consistent with a previous study using dynamic stimuli, OXT had no effect on eye-gaze patterns when viewing static emotional faces, and this was not related to recognition accuracy or face processing time. These findings suggest that OXT-induced enhanced facial emotion recognition is not necessarily mediated by an increase in attention to the eye-region of faces, as previously assumed. We discuss several methodological issues which may explain discrepant findings and suggest that the effect of OXT on visual attention may differ depending on task requirements. (JINS, 2017, 23, 23-33).

  2. Reading in developmental prosopagnosia: Evidence for a dissociation between word and face recognition.

    PubMed

    Starrfelt, Randi; Klargaard, Solja K; Petersen, Anders; Gerlach, Christian

    2018-02-01

    Recent models suggest that face and word recognition may rely on overlapping cognitive processes and neural regions. In support of this notion, face recognition deficits have been demonstrated in developmental dyslexia. Here we test whether the opposite association can also be found, that is, impaired reading in developmental prosopagnosia. We tested 10 adults with developmental prosopagnosia and 20 matched controls. All participants completed the Cambridge Face Memory Test, the Cambridge Face Perception Test and a face recognition questionnaire used to quantify everyday face recognition experience. Reading was measured in four experimental tasks, testing different levels of letter, word, and text reading: (a) single word reading with words of varying length, (b) vocal response times in single letter and short word naming, (c) recognition of single letters and short words at brief exposure durations (targeting the word superiority effect), and (d) text reading. Participants with developmental prosopagnosia performed strikingly similarly to controls across the four reading tasks. Formal analysis revealed a significant dissociation between word and face recognition, as the difference in performance with faces and words was significantly greater for participants with developmental prosopagnosia than for controls. Adult developmental prosopagnosics read as quickly and fluently as controls, while they are seemingly unable to learn efficient strategies for recognizing faces. We suggest that this is due to the differing demands that face and word recognition put on the perceptual system. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. The Vanderbilt Expertise Test Reveals Domain-General and Domain-Specific Sex Effects in Object Recognition

    PubMed Central

    McGugin, Rankin W.; Richler, Jennifer J.; Herzmann, Grit; Speegle, Magen; Gauthier, Isabel

    2012-01-01

    Individual differences in face recognition are often contrasted with differences in object recognition using a single object category. Likewise, individual differences in perceptual expertise for a given object domain have typically been measured relative to only a single category baseline. In Experiment 1, we present a new test of object recognition, the Vanderbilt Expertise Test (VET), which is comparable in methods to the Cambridge Face Memory Test (CFMT) but uses eight different object categories. Principal component analysis reveals that the underlying structure of the VET can be largely explained by two independent factors, which demonstrate good reliability and capture interesting sex differences inherent in the VET structure. In Experiment 2, we show how the VET can be used to separate domain-specific from domain-general contributions to a standard measure of perceptual expertise. While domain-specific contributions are found for car matching for both men and women and for plane matching in men, women in this sample appear to use more domain-general strategies to match planes. In Experiment 3, we use the VET to demonstrate that holistic processing of faces predicts face recognition independently of general object recognition ability, which has a sex-specific contribution to face recognition. Overall, the results suggest that the VET is a reliable and valid measure of object recognition abilities and can measure both domain-general skills and domain-specific expertise, which were both found to depend on the sex of observers. PMID:22877929

  4. Object Recognition Memory and the Rodent Hippocampus

    ERIC Educational Resources Information Center

    Broadbent, Nicola J.; Gaskin, Stephane; Squire, Larry R.; Clark, Robert E.

    2010-01-01

    In rodents, the novel object recognition task (NOR) has become a benchmark task for assessing recognition memory. Yet, despite its widespread use, a consensus has not developed about which brain structures are important for task performance. We assessed both the anterograde and retrograde effects of hippocampal lesions on performance in the NOR…

  5. Multiple confidence estimates as indices of eyewitness memory.

    PubMed

    Sauer, James D; Brewer, Neil; Weber, Nathan

    2008-08-01

    Eyewitness identification decisions are vulnerable to various influences on witnesses' decision criteria that contribute to false identifications of innocent suspects and failures to choose perpetrators. An alternative procedure using confidence estimates to assess the degree of match between novel and previously viewed faces was investigated. Classification algorithms were applied to participants' confidence data to determine when a confidence value or pattern of confidence values indicated a positive response. Experiment 1 compared confidence group classification accuracy with a binary decision control group's accuracy on a standard old-new face recognition task and found superior accuracy for the confidence group for target-absent trials but not for target-present trials. Experiment 2 used a face mini-lineup task and found reduced target-present accuracy offset by large gains in target-absent accuracy. Using a standard lineup paradigm, Experiments 3 and 4 also found improved classification accuracy for target-absent lineups and, with a more sophisticated algorithm, for target-present lineups. This demonstrates the accessibility of evidence for recognition memory decisions and points to a more sensitive index of memory quality than is afforded by binary decisions.
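
    The general idea of turning confidence ratings into recognition decisions can be illustrated with a much simpler rule than the classification algorithms used in these experiments: choose, on calibration data, the confidence cut-off that best separates targets from distractors. The data below are hypothetical.

    ```python
    # Illustrative threshold rule for deriving old/new decisions from confidence
    # ratings; not the specific classification algorithms used in the experiments.
    import numpy as np

    def best_threshold(ratings, is_target):
        """Return the confidence cut-off that maximises classification accuracy."""
        candidates = np.unique(ratings)
        accuracies = [np.mean((ratings >= t) == is_target) for t in candidates]
        return candidates[int(np.argmax(accuracies))]

    # Hypothetical 0-100 confidence ratings from a calibration phase.
    ratings = np.array([90, 75, 60, 55, 40, 30, 85, 20, 70, 35])
    is_target = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0], dtype=bool)
    t = best_threshold(ratings, is_target)
    print(f"classify as 'old' when confidence >= {t}")
    ```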

  6. Repetition priming of access to biographical information from faces.

    PubMed

    Johnston, Robert A; Barry, Christopher

    2006-02-01

    Two experiments examined repetition priming on tasks that require access to semantic (or biographical) information from faces. In the second stage of each experiment, participants made either a nationality or an occupation decision to faces of celebrities, and, in the first stage, they made either the same or a different decision to faces (in Experiment 1) or the same or a different decision to printed names (in Experiment 2). All combinations of priming and test tasks produced clear repetition effects, which occurred irrespective of whether the decisions made were positive or negative. Same-domain (face-to-face) repetition priming was larger than cross-domain (name-to-face) priming, and priming was larger when the two tasks were the same. It is discussed how these findings are more readily accommodated by the Burton, Bruce, and Johnston (1990) model of face recognition than by episode-based accounts of repetition priming.

  7. Family environment influences emotion recognition following paediatric traumatic brain injury

    PubMed Central

    SCHMIDT, ADAM T.; ORSTEN, KIMBERLEY D.; HANTEN, GERRI R.; LI, XIAOQI; LEVIN, HARVEY S.

    2011-01-01

    Objective This study investigated the relationship between family functioning and performance on two tasks of emotion recognition (emotional prosody and face emotion recognition) and a cognitive control procedure (the Flanker task) following paediatric traumatic brain injury (TBI) or orthopaedic injury (OI). Methods A total of 142 children (75 TBI, 67 OI) were assessed on three occasions: baseline, 3 months and 1 year post-injury on the two emotion recognition tasks and the Flanker task. Caregivers also completed the Life Stressors and Resources Scale (LISRES) on each occasion. Growth curve analysis was used to analyse the data. Results Results indicated that family functioning influenced performance on the emotional prosody and Flanker tasks but not on the face emotion recognition task. Findings on both the emotional prosody and Flanker tasks were generally similar across groups. However, financial resources emerged as significantly related to emotional prosody performance in the TBI group only (p = 0.0123). Conclusions Findings suggest family functioning variables—especially financial resources—can influence performance on an emotional processing task following TBI in children. PMID:21058900

  8. What Factors Underlie Associative and Categorical Memory Illusions? The Roles of Backward Associative Strength and Interitem Connectivity

    ERIC Educational Resources Information Center

    Knott, Lauren M.; Dewhurst, Stephen A.; Howe, Mark L.

    2012-01-01

    Factors that affect categorical and associative false memory illusions were investigated in 2 experiments. In Experiment 1, backward associative strength (BAS) from the list word to the critical lure and interitem connectivity were manipulated in Deese-Roediger-McDermott (DRM) and category list types. For both recall and recognition tasks, the…

  9. When Does Memory Monitoring Succeed versus Fail? Comparing Item-Specific and Relational Encoding in the DRM Paradigm

    ERIC Educational Resources Information Center

    Huff, Mark J.; Bodner, Glen E.

    2013-01-01

    We compared the effects of item-specific versus relational encoding on recognition memory in the Deese-Roediger-McDermott paradigm. In Experiment 1, we directly compared item-specific and relational encoding instructions, whereas in Experiments 2 and 3 we biased pleasantness and generation tasks, respectively, toward one or the other type of…

  10. Comprehension of Written Sentences as a Core Component of Children's Reading Comprehension

    ERIC Educational Resources Information Center

    Ecalle, Jean; Bouchafa, Houria; Potocki, Anna; Magnan, Annie

    2013-01-01

    Two experiments were conducted to test the hypothesis that sentence processing is an essential mediatory skill between word recognition and text comprehension in reading. In Experiment 1, a semantic similarity judgement task was used with children from Grade 2 to Grade 9. They had to say whether two written sentences had the same (or very similar)…

  11. Auditory word recognition: extrinsic and intrinsic effects of word frequency.

    PubMed

    Connine, C M; Titone, D; Wang, J

    1993-01-01

    Two experiments investigated the influence of word frequency in a phoneme identification task. Speech voicing continua were constructed so that one endpoint was a high-frequency word and the other endpoint was a low-frequency word (e.g., best-pest). Experiment 1 demonstrated that ambiguous tokens were labeled such that a high-frequency word was formed (intrinsic frequency effect). Experiment 2 manipulated the frequency composition of the list (extrinsic frequency effect). A high-frequency list bias produced an exaggerated influence of frequency; a low-frequency list bias showed a reverse frequency effect. Reaction time effects were discussed in terms of activation and postaccess decision models of frequency coding. The results support a late use of frequency in auditory word recognition.

  12. Race coding and the other-race effect in face recognition.

    PubMed

    Rhodes, Gillian; Locke, Vance; Ewing, Louise; Evangelista, Emma

    2009-01-01

    Other-race faces are generally recognised more poorly than own-race faces. According to Levin's influential race-coding hypothesis, this other-race recognition deficit results from spontaneous coding of race-specifying information, at the expense of individuating information, in other-race faces. Therefore, requiring participants to code race-specifying information for all faces should eliminate the other-race effect by reducing recognition of own-race faces to the level of other-race faces. We tested this prediction in two experiments. Race coding was induced by requiring participants to rate study faces on race typicality (experiment 1) or to categorise them by race (experiment 2). Neither manipulation reduced the other-race effect, providing no support for the race-coding hypothesis. Instead, race-coding instructions marginally increased the other-race effect in experiment 1 and had no effect in experiment 2. These results do not support the race-coding hypothesis. Surprisingly, a control task of rating the attractiveness of study faces increased the other-race effect, indicating that deeper encoding of faces does not necessarily reduce the effect (experiment 1). Finally, the normally robust other-race effect was absent when participants were instructed to individuate other-race faces (experiment 2). We suggest that poorer recognition of other-race faces may reflect reduced perceptual expertise with such faces and perhaps reduced motivation to individuate them.

  13. Why we respond faster to the self than to others? An implicit positive association theory of self-advantage during implicit face recognition.

    PubMed

    Ma, Yina; Han, Shihui

    2010-06-01

    Human adults usually respond faster to their own faces rather than to those of others. We tested the hypothesis that an implicit positive association (IPA) with self mediates self-advantage in face recognition through 4 experiments. Using a self-concept threat (SCT) priming that associated the self with negative personal traits and led to a weakened IPA with self, we found that self-face advantage in an implicit face-recognition task that required identification of face orientation was eliminated by the SCT priming. Moreover, the SCT effect on self-face recognition was evident only with the left-hand responses. Furthermore, the SCT effect on self-face recognition was observed in both Chinese and American participants. Our findings support the IPA hypothesis that defines a social cognitive mechanism of self-advantage in face recognition.

  14. The Costs and Benefits of Testing and Guessing on Recognition Memory

    ERIC Educational Resources Information Center

    Huff, Mark J.; Balota, David A.; Hutchison, Keith A.

    2016-01-01

    We examined whether 2 types of interpolated tasks (i.e., retrieval-practice via free recall or guessing a missing critical item) improved final recognition for related and unrelated word lists relative to restudying or completing a filler task. Both retrieval-practice and guessing tasks improved correct recognition relative to restudy and filler…

  15. Rapid Naming Speed and Chinese Character Recognition

    ERIC Educational Resources Information Center

    Liao, Chen-Huei; Georgiou, George K.; Parrila, Rauno

    2008-01-01

    We examined the relationship between rapid naming speed (RAN) and Chinese character recognition accuracy and fluency. Sixty-three grade 2 and 54 grade 4 Taiwanese children were administered four RAN tasks (colors, digits, Zhu-Yin-Fu-Hao, characters), and two character recognition tasks. RAN tasks accounted for more reading variance in grade 4 than…

  16. How Fast is Famous Face Recognition?

    PubMed Central

    Barragan-Jason, Gladys; Lachat, Fanny; Barbeau, Emmanuel J.

    2012-01-01

    The rapid recognition of familiar faces is crucial for social interactions. However the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to “fast” visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail. PMID:23162503

  17. The beneficial effect of oxytocin on avoidance-related facial emotion recognition depends on early life stress experience.

    PubMed

    Feeser, Melanie; Fan, Yan; Weigand, Anne; Hahn, Adam; Gärtner, Matti; Aust, Sabine; Böker, Heinz; Bajbouj, Malek; Grimm, Simone

    2014-12-01

    Previous studies have shown that oxytocin (OXT) enhances social cognitive processes. It has also been demonstrated that OXT does not uniformly facilitate social cognition. The effects of OXT administration strongly depend on the exposure to stressful experiences in early life. Emotional facial recognition is crucial for social cognition. However, no study has yet examined how the effects of OXT on the ability to identify emotional faces are altered by early life stress (ELS) experiences. Given the role of OXT in modulating social motivational processes, we specifically aimed to investigate its effects on the recognition of approach- and avoidance-related facial emotions. In a double-blind, between-subjects, placebo-controlled design, 82 male participants performed an emotion recognition task with faces taken from the "Karolinska Directed Emotional Faces" set. We clustered the six basic emotions along the dimensions approach (happy, surprise, anger) and avoidance (fear, sadness, disgust). ELS was assessed with the Childhood Trauma Questionnaire (CTQ). Our results showed that OXT improved the ability to recognize avoidance-related emotional faces as compared to approach-related emotional faces. Whereas the performance for avoidance-related emotions in participants with higher ELS scores was comparable in both OXT and placebo condition, OXT enhanced emotion recognition in participants with lower ELS scores. Independent of OXT administration, we observed increased emotion recognition for avoidance-related faces in participants with high ELS scores. Our findings suggest that the investigation of OXT on social recognition requires a broad approach that takes ELS experiences as well as motivational processes into account.

  18. The effects of initial testing on false recall and false recognition in the social contagion of memory paradigm.

    PubMed

    Huff, Mark J; Davis, Sara D; Meade, Michelle L

    2013-08-01

    In three experiments, participants studied photographs of common household scenes. Following study, participants completed a category-cued recall test without feedback (Exps. 1 and 3), a category-cued recall test with feedback (Exp. 2), or a filler task (no-test condition). Participants then viewed recall tests from fictitious previous participants that contained erroneous items presented either one or four times, and then completed final recall and source recognition tests. The participants in all conditions reported incorrect items during final testing (a social contagion effect), and across experiments, initial testing had no impact on false recall of erroneous items. However, on the final source-monitoring recognition test, initial testing had a protective effect against false source recognition: Participants who were initially tested with and without feedback on category-cued initial tests attributed fewer incorrect items to the original event on the final source-monitoring recognition test than did participants who were not initially tested. These data demonstrate that initial testing may protect individuals' memories from erroneous suggestions.

  19. Transfer-Appropriate Processing in Recognition Memory: Perceptual and Conceptual Effects on Recognition Memory Depend on Task Demands

    ERIC Educational Resources Information Center

    Parks, Colleen M.

    2013-01-01

    Research examining the importance of surface-level information to familiarity in recognition memory tasks is mixed: Sometimes it affects recognition and sometimes it does not. One potential explanation of the inconsistent findings comes from the ideas of dual process theory of recognition and the transfer-appropriate processing framework, which…

  20. Chemical Entity Recognition and Resolution to ChEBI

    PubMed Central

    Grego, Tiago; Pesquita, Catia; Bastos, Hugo P.; Couto, Francisco M.

    2012-01-01

    Chemical entities are ubiquitous throughout the biomedical literature, and the development of text-mining systems that can efficiently identify those entities is required. Due to the lack of available corpora and data resources, the community has focused its efforts on the development of gene and protein named entity recognition systems, but with the release of ChEBI and the availability of an annotated corpus, this task can be addressed. We developed a machine-learning-based method for chemical entity recognition and a lexical-similarity-based method for chemical entity resolution and compared them with Whatizit, a popular dictionary-based method. Our methods outperformed the dictionary-based method in all tasks, yielding an improvement in F-measure of 20% for the entity recognition task, 2–5% for the entity-resolution task, and 15% for combined entity recognition and resolution tasks. PMID:25937941
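
    Since the record above scores entity recognition and resolution by F-measure, the short sketch below shows how precision, recall, and F1 can be computed over predicted versus gold entity spans. It is a minimal illustration only; the span format (character offsets plus a resolved ChEBI identifier) and the example values are assumptions, not data from the paper.

      # Minimal sketch: precision, recall, and F-measure for an entity recognition run.
      # The spans and ChEBI identifiers below are invented placeholders.
      def f_measure(predicted, gold):
          """Precision, recall, and F1 over sets of predicted/gold entity spans."""
          predicted, gold = set(predicted), set(gold)
          true_pos = len(predicted & gold)
          precision = true_pos / len(predicted) if predicted else 0.0
          recall = true_pos / len(gold) if gold else 0.0
          f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
          return precision, recall, f1

      # Hypothetical spans: (start_offset, end_offset, resolved_id) triples.
      gold = [(10, 17, "CHEBI:15377"), (42, 49, "CHEBI:16236")]
      pred = [(10, 17, "CHEBI:15377"), (60, 65, "CHEBI:26710")]
      print(f_measure(pred, gold))  # (0.5, 0.5, 0.5)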

  1. Construction of language models for a handwritten mail reading system

    NASA Astrophysics Data System (ADS)

    Morillot, Olivier; Likforman-Sulem, Laurence; Grosicki, Emmanuèle

    2012-01-01

    This paper presents a system for the recognition of unconstrained handwritten mails. The main part of this system is an HMM recognizer which uses trigraphs to model contextual information. This recognition system does not require any segmentation into words or characters and works directly at the line level. To take into account linguistic information and enhance performance, a language model is introduced. This language model is based on bigrams and built from training document transcriptions only. Different experiments with various vocabulary sizes and language models have been conducted. Word Error Rate and Perplexity values are compared to show the benefit of specific language models fitted to the handwritten mail recognition task.
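
    The record above reports Perplexity values for a bigram language model built from training transcriptions. As a rough illustration of what such a measurement involves, the sketch below trains an add-one-smoothed bigram model and computes per-token perplexity; the tokenization, smoothing choice, and toy sentences are assumptions of this sketch, not details of the system described.

      # Minimal sketch: bigram language model with add-one smoothing and perplexity.
      import math
      from collections import Counter

      def train_bigram(sentences):
          """Count bigram and context frequencies over token lists."""
          unigrams, bigrams = Counter(), Counter()
          for sent in sentences:
              tokens = ["<s>"] + sent + ["</s>"]
              unigrams.update(tokens[:-1])                   # contexts only
              bigrams.update(zip(tokens[:-1], tokens[1:]))
          vocab = {w for s in sentences for w in s} | {"<s>", "</s>"}
          return unigrams, bigrams, len(vocab)

      def perplexity(sentences, unigrams, bigrams, vocab_size):
          """Per-token perplexity under the add-one-smoothed bigram model."""
          log_prob, n_tokens = 0.0, 0
          for sent in sentences:
              tokens = ["<s>"] + sent + ["</s>"]
              for prev, curr in zip(tokens[:-1], tokens[1:]):
                  p = (bigrams[(prev, curr)] + 1) / (unigrams[prev] + vocab_size)
                  log_prob += math.log(p)
                  n_tokens += 1
          return math.exp(-log_prob / n_tokens)

      # Toy training and test data (invented mail fragments).
      train = [["veuillez", "trouver", "ci-joint"], ["je", "vous", "prie", "de", "trouver"]]
      uni, bi, v = train_bigram(train)
      print(perplexity([["veuillez", "trouver"]], uni, bi, v))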

  2. Contextual modulation of biases in face recognition.

    PubMed

    Felisberti, Fatima Maria; Pavey, Louisa

    2010-09-23

    The ability to recognize the faces of potential cooperators and cheaters is fundamental to social exchanges, given that cooperation for mutual benefit is expected. Studies addressing biases in face recognition have so far proved inconclusive, with reports of biases towards faces of cheaters, biases towards faces of cooperators, or no biases at all. This study attempts to uncover possible causes underlying such discrepancies. Four experiments were designed to investigate biases in face recognition during social exchanges when behavioral descriptors (prosocial, antisocial or neutral) embedded in different scenarios were tagged to faces during memorization. Face recognition, measured as accuracy and response latency, was tested with modified yes-no, forced-choice and recall tasks (N = 174). An enhanced recognition of faces tagged with prosocial descriptors was observed when the encoding scenario involved financial transactions and the rules of the social contract were not explicit (experiments 1 and 2). Such bias was eliminated or attenuated by making participants explicitly aware of "cooperative", "cheating" and "neutral/indifferent" behaviors via a pre-test questionnaire and then adding such tags to behavioral descriptors (experiment 3). Further, in a social judgment scenario with descriptors of salient moral behaviors, recognition of antisocial and prosocial faces was similar, but significantly better than neutral faces (experiment 4). The results highlight the relevance of descriptors and scenarios of social exchange in face recognition, when the frequency of prosocial and antisocial individuals in a group is similar. Recognition biases towards prosocial faces emerged when descriptors did not state the rules of a social contract or the moral status of a behavior, and they point to the existence of broad and flexible cognitive abilities finely tuned to minor changes in social context.

  3. The role of retrieval mode and retrieval orientation in retrieval practice: insights from comparing recognition memory testing formats and restudying.

    PubMed

    Gao, Chuanji; Rosburg, Timm; Hou, Mingzhu; Li, Bingbing; Xiao, Xin; Guo, Chunyan

    2016-12-01

    The effectiveness of retrieval practice for aiding long-term memory, referred to as the testing effect, has been widely demonstrated. However, the specific neurocognitive mechanisms underlying this phenomenon remain unclear. In the present study, we sought to explore the role of pre-retrieval processes at initial testing on later recognition performance by using event-related potentials (ERPs). Subjects studied two lists of words (Chinese characters) and then performed a recognition task or a source memory task, or restudied the word lists. At the end of the experiment, subjects received a final recognition test based on the remember-know paradigm. Behaviorally, initial testing (active retrieval) enhanced memory retention relative to restudying (passive retrieval). The retrieval mode at initial testing was indexed by more positive-going ERPs for unstudied items in the active-retrieval tasks than in passive retrieval from 300 to 900 ms. Follow-up analyses showed that the magnitude of the early ERP retrieval mode effect (300-500 ms) was predictive of the behavioral testing effect later on. In addition, the ERPs for correctly rejected new items during initial testing differed between the two active-retrieval tasks from 500 to 900 ms, and this ERP retrieval orientation effect predicted differential behavioral testing gains between the two active-retrieval conditions. Our findings confirm that initial testing promotes later retrieval relative to restudying, and they further suggest that adopting pre-retrieval processing in the forms of retrieval mode and retrieval orientation might contribute to these memory enhancements.

  4. ERP correlates of letter identity and letter position are modulated by lexical frequency

    PubMed Central

    Vergara-Martínez, Marta; Perea, Manuel; Gómez, Pablo; Swaab, Tamara Y.

    2013-01-01

    The encoding of letter position is a key aspect in all recently proposed models of visual-word recognition. We analyzed the impact of lexical frequency on letter position assignment by examining the temporal dynamics of lexical activation induced by pseudowords extracted from words of different frequencies. For each word (e.g., BRIDGE), we created two pseudowords: A transposed-letter (TL: BRIGDE) and a replaced-letter pseudoword (RL: BRITGE). ERPs were recorded while participants read words and pseudowords in two tasks: Semantic categorization (Experiment 1) and lexical decision (Experiment 2). For high-frequency stimuli, similar ERPs were obtained for words and TL-pseudowords, but the N400 component to words was reduced relative to RL-pseudowords, indicating less lexical/semantic activation. In contrast, TL- and RL-pseudowords created from low-frequency stimuli elicited similar ERPs. Behavioral responses in the lexical decision task paralleled this asymmetry. The present findings impose constraints on computational and neural models of visual-word recognition. PMID:23454070

  5. Recognition and reading aloud of kana and kanji word: an fMRI study.

    PubMed

    Ino, Tadashi; Nakai, Ryusuke; Azuma, Takashi; Kimura, Toru; Fukuyama, Hidenao

    2009-03-16

    It has been proposed that different brain regions are recruited for processing two Japanese writing systems, namely, kanji (morphograms) and kana (syllabograms). However, this difference may depend upon what type of word was used and also on what type of task was performed. Using fMRI, we investigated brain activation for processing kanji and kana words with similar high familiarity in two tasks: word recognition and reading aloud. During both tasks, words and non-words were presented side by side, and the subjects were required to press a button corresponding to the real word in the word recognition task and to read aloud the real word in the reading-aloud task. Brain activations were similar between kanji and kana during the reading-aloud task, whereas during the word recognition task, in which accurate identification and selection were required, kanji relative to kana activated regions of bilateral frontal, parietal and occipitotemporal cortices, all of which were related mainly to visual word-form analysis and visuospatial attention. Concerning the difference in brain activity between the two tasks, differential activation was found only in the regions associated with task-specific sensorimotor processing for kana, whereas the visuospatial attention network also showed greater activation during the word recognition task than during the reading-aloud task for kanji. We conclude that the differences in brain activation between kanji and kana depend on the interaction between the script characteristics and the task demands.

  6. Repetition priming of face recognition in a serial choice reaction-time task.

    PubMed

    Roberts, T; Bruce, V

    1989-05-01

    Marshall & Walker (1987) found that pictorial stimuli yield visual priming that is disrupted by an unpredictable visual event in the response-stimulus interval. They argue that visual stimuli are represented in memory in the form of distinct visual and object codes. Bruce & Young (1986) propose similar pictorial, structural and semantic codes which mediate the recognition of faces, yet repetition priming results obtained with faces as stimuli (Bruce & Valentine, 1985), and with objects (Warren & Morton, 1982), are quite different from those of Marshall & Walker (1987), in the sense that recognition is facilitated by pictures presented 20 minutes earlier. The experiment reported here used different views of familiar and unfamiliar faces as stimuli in a serial choice reaction-time task and found that, with identical pictures, repetition priming survives an intervening item requiring a response, with both familiar and unfamiliar faces. Furthermore, with familiar faces such priming was present even when the view of the prime was different from the target. The theoretical implications of these results are discussed.

  7. Signal detection with criterion noise: applications to recognition memory.

    PubMed

    Benjamin, Aaron S; Diaz, Michael; Wee, Serena

    2009-01-01

    A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of discrimination incorporating criterion variability are derived, and the model's relationship with extant models of decision making in discrimination tasks is examined. An experiment evaluating recognition memory for ensembles of word stimuli revealed that criterion noise is not trivial in magnitude and contributes substantially to variance in the slope of the isosensitivity function. The authors discuss how ND-TSD can help explain a number of current and historical puzzles in recognition memory, including the inconsistent relationship between manipulations of learning and the isosensitivity function's slope, the lack of invariance of the slope with manipulations of bias or payoffs, the effects of aging on the decision-making process in recognition, and the nature of responding in remember-know decision tasks. ND-TSD poses novel, theoretically meaningful constraints on theories of recognition and decision making more generally, and provides a mechanism for rapprochement between theories of decision making that employ deterministic response rules and those that postulate probabilistic response rules.
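
    The central claim above is that criterion placement is itself noisy. A rough way to see why that matters for measured recognition performance is to simulate an equal-variance signal detection model with trial-to-trial jitter on the decision criterion and compare the d' recovered from hit and false-alarm rates against the true sensitivity. This is a minimal sketch; all parameter values are illustrative and are not estimates from the article.

      # Minimal sketch: criterion noise attenuates the d' estimated from hit/false-alarm rates.
      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      n, true_d_prime, criterion = 200000, 1.5, 0.75

      def apparent_d_prime(criterion_sd):
          old = rng.normal(true_d_prime, 1.0, n)                 # evidence for studied items
          new = rng.normal(0.0, 1.0, n)                          # evidence for unstudied items
          c_old = criterion + rng.normal(0.0, criterion_sd, n)   # noisy criterion per trial
          c_new = criterion + rng.normal(0.0, criterion_sd, n)
          hit, fa = (old > c_old).mean(), (new > c_new).mean()
          return norm.ppf(hit) - norm.ppf(fa)

      print(apparent_d_prime(0.0))   # ~1.5: a noise-free criterion recovers the true d'
      print(apparent_d_prime(1.0))   # noticeably smaller: criterion noise deflates measured d'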

  8. Component-based target recognition inspired by human vision

    NASA Astrophysics Data System (ADS)

    Zheng, Yufeng; Agyepong, Kwabena

    2009-05-01

    In contrast with machine vision, humans can recognize an object from a complex background with great flexibility. For example, given the task of finding and circling all cars (no further information) in a picture, you may build a virtual image in mind from the task (or target) description before looking at the picture. Specifically, the virtual car image may be composed of key components such as the driver cabin and wheels. In this paper, we propose a component-based target recognition method by simulating the human recognition process. The component templates (equivalent to the virtual image in mind) of the target (car) are manually decomposed from the target feature image. Meanwhile, the edges of the testing image can be extracted by using a difference of Gaussian (DOG) model that simulates the spatiotemporal response in visual processing. A phase correlation matching algorithm is then applied to match the templates with the testing edge image. If all key component templates are matched with the examined object, then this object is recognized as the target. Besides recognition accuracy, we also investigate whether this method works with partial targets (half cars). In our experiments, several natural pictures taken on streets were used to test the proposed method. The preliminary results show that the component-based recognition method is very promising.
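
    Both ingredients of the pipeline described above, difference-of-Gaussian edge extraction and phase correlation matching, are standard operations that can be sketched compactly. The toy script below applies a DoG filter and then locates a cropped edge-map patch by phase correlation; the array sizes, filter sigmas, and the random test image are assumptions for illustration only, not the street imagery used in the paper.

      # Minimal sketch: DoG edge extraction followed by phase correlation template matching.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      def dog_edges(image, sigma_small=1.0, sigma_large=2.0):
          """Approximate edge map as the difference of two Gaussian-blurred copies."""
          return gaussian_filter(image, sigma_small) - gaussian_filter(image, sigma_large)

      def phase_correlation(image, template):
          """Return the peak correlation value and offset of template within image."""
          f_img = np.fft.fft2(image)
          f_tpl = np.fft.fft2(template, s=image.shape)    # zero-pad template to image size
          cross_power = f_img * np.conj(f_tpl)
          cross_power /= np.abs(cross_power) + 1e-12      # keep phase only
          corr = np.fft.ifft2(cross_power).real
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return corr[peak], peak

      scene_edges = dog_edges(np.random.rand(128, 128))
      template = scene_edges[40:60, 40:60].copy()   # stand-in for a "wheel" component template
      score, offset = phase_correlation(scene_edges, template)
      print(score, offset)   # peak offset is typically near (40, 40) in this toy example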

  9. Eye-Gaze Analysis of Facial Emotion Recognition and Expression in Adolescents with ASD.

    PubMed

    Wieckowski, Andrea Trubanova; White, Susan W

    2017-01-01

    Impaired emotion recognition and expression in individuals with autism spectrum disorder (ASD) may contribute to observed social impairment. The aim of this study was to examine the role of visual attention directed toward nonsocial aspects of a scene as a possible mechanism underlying recognition and expressive ability deficiency in ASD. One recognition and two expression tasks were administered. Recognition was assessed in a forced-choice paradigm, and expression was assessed during scripted and free-choice response (in response to emotional stimuli) tasks in youth with ASD (n = 20) and an age-matched sample of typically developing youth (n = 20). During stimulus presentation prior to response in each task, participants' eye gaze was tracked. Youth with ASD were less accurate at identifying disgust and sadness in the recognition task. They fixated less to the eye region of stimuli showing surprise. A group difference was found during the free-choice response task, such that those with ASD expressed emotion less clearly, but not during the scripted task. Results suggest altered eye gaze to the mouth region but not the eye region as a candidate mechanism for decreased ability to recognize or express emotion. Findings inform our understanding of the association between social attention and emotion recognition and expression deficits.

  10. Recognition Without Words: Using Taste to Explore Survival Processing

    PubMed Central

    Hallock, Henry L.; Garman, Heather D.; Cook, Shaun P.; Gallagher, Shawn P.

    2017-01-01

    Many educational demonstrations of memory and recall employ word lists and number strings; items that lend themselves to semantic organization and “chunking.” By applying taste recall to the adaptive memory paradigm, which evaluates memory from a survival-based evolutionary perspective, we have developed a simple, inexpensive exercise that defies mnemonic strategies. Most adaptive memory studies have evaluated recall of words encountered while imagining survival and non-survival scenarios. Here, we’ve left the lexical domain and hypothesized that taste memory, as measured by recognition, would be best when acquisition occurs under imagined threat of personal harm, namely poisoning. We tested participants individually while they evaluated eight teas in one of three conditions: in one, they evaluated the toxicity of the tea (survival condition), in a second, they considered the marketability of the tea and, in the third, they evaluated the bitterness of the tea. After a filler task, a surprise recognition task required the participants to taste and identify the eight original teas from a group of 16 that included eight novel teas. The survival condition led to better recognition than the bitterness condition but, surprisingly, it did not yield better recognition than the marketing condition. A second experiment employed a streamlined design more appropriate for classroom settings and failed to support the hypothesis that planning enhanced recognition in survival scenarios. This simple technique has, at least, revealed a robust levels-of-processing effect for taste recognition and invites students to consider the adaptive advantages of all forms of memory. PMID:28690433

  11. Dietary effects on object recognition: The impact of high-fat high-sugar diets on recollection and familiarity-based memory.

    PubMed

    Tran, Dominic M D; Westbrook, R Frederick

    2018-05-31

    Exposure to a high-fat high-sugar (HFHS) diet rapidly impairs novel-place- but not novel-object-recognition memory in rats (Tran & Westbrook, 2015, 2017). Three experiments sought to investigate the generality of diet-induced cognitive deficits by examining whether there are conditions under which object-recognition memory is impaired. Experiments 1 and 3 tested the strength of short- and long-term object-memory trace, respectively, by varying the interval of time between object familiarization and subsequent novel object test. Experiment 2 tested the effect of increasing working memory load on object-recognition memory by interleaving additional object exposures between familiarization and test in an n-back style task. Experiments 1-3 failed to detect any differences in object recognition between HFHS and control rats. Experiment 4 controlled for object novelty by separately familiarizing both objects presented at test, which included one remote-familiar and one recent-familiar object. Under these conditions, when test objects differed in their relative recency, HFHS rats showed a weaker memory trace for the remote object compared to chow rats. This result suggests that the diet leaves intact recollection judgments, but impairs familiarity judgments. We speculate that the HFHS diet adversely affects "where" memories as well as the quality of "what" memories, and discuss these effects in relation to recollection and familiarity memory models, hippocampal-dependent functions, and episodic food memories. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  12. The Effects of Aging and IQ on Item and Associative Memory

    PubMed Central

    Ratcliff, Roger; Thapar, Anjali; McKoon, Gail

    2011-01-01

    The effects of aging and IQ on performance were examined in four memory tasks: item recognition, associative recognition, cued recall, and free recall. For item and associative recognition, accuracy and the response time distributions for correct and error responses were explained by Ratcliff’s (1978) diffusion model, at the level of individual participants. The values of the components of processing identified by the model for the recognition tasks, as well as accuracy for cued and free recall, were compared across levels of IQ ranging from 85 to 140 and age (college-age, 60-74 year olds, and 75-90 year olds). IQ had large effects on the quality of the evidence from memory on which decisions were based in the recognition tasks and accuracy in the recall tasks, except for the oldest participants for whom some of the measures were near floor values. Drift rates in the recognition tasks, accuracy in the recall tasks, and IQ all correlated strongly with each other. However, there was a small decline in drift rates for item recognition and a large decline for associative recognition and accuracy in cued recall (about 70 percent). In contrast, there were large age effects on boundary separation and nondecision time (which correlated across tasks), but little effect of IQ. The implications of these results for single- and dual-process models of item recognition are discussed and it is concluded that models that deal with both RTs and accuracy are subject to many more constraints than models that deal with only one of these measures. Overall, the results of the study show a complicated but interpretable pattern of interactions that present important targets for response time and memory models. PMID:21707207
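
    The diffusion-model components named above (drift rate, boundary separation, nondecision time) can be made concrete with a small trial-level simulation. The sketch below generates responses and reaction times from a basic Wiener diffusion process; the parameter values and step size are illustrative assumptions, not the fitted values reported in the study.

      # Minimal sketch: simulating single trials of a basic drift diffusion model.
      import numpy as np

      def simulate_trial(drift, boundary, nondecision, rng, dt=0.001, noise_sd=0.1):
          """Return (response, reaction_time): 1 = upper boundary ('old'), 0 = lower ('new')."""
          x, t = boundary / 2.0, 0.0        # unbiased starting point midway between boundaries
          while 0.0 < x < boundary:
              x += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
              t += dt
          return (1 if x >= boundary else 0), nondecision + t

      rng = np.random.default_rng(2)
      trials = [simulate_trial(drift=0.3, boundary=0.12, nondecision=0.45, rng=rng)
                for _ in range(2000)]
      accuracy = np.mean([resp for resp, _ in trials])
      mean_rt = np.mean([rt for _, rt in trials])
      print(accuracy, mean_rt)   # lower drift or wider boundaries change both measures together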

  13. Intrinsic and contextual features in object recognition.

    PubMed

    Schlangen, Derrick; Barenholtz, Elan

    2015-01-28

    The context in which an object is found can facilitate its recognition. Yet, it is not known how effective this contextual information is relative to the object's intrinsic visual features, such as color and shape. To address this, we performed four experiments using rendered scenes with novel objects. In each experiment, participants first performed a visual search task, searching for a uniquely shaped target object whose color and location within the scene were experimentally manipulated. We then tested participants' tendency to use their knowledge of the location and color information in an identification task when the objects' images were degraded due to blurring, thus eliminating the shape information. In Experiment 1, we found that, in the absence of any diagnostic intrinsic features, participants identified objects based purely on their locations within the scene. In Experiment 2, we found that participants combined an intrinsic feature, color, with contextual location in order to uniquely specify an object. In Experiment 3, we found that when an object's color and location information were in conflict, participants identified the object using both sources of information equally. Finally, in Experiment 4, we found that participants used whichever source of information, color or location, was more statistically reliable in order to identify the target object. Overall, these experiments show that the context in which objects are found can play as important a role as intrinsic features in identifying the objects. © 2015 ARVO.

  14. Spatiotemporal proximity effects in visual short-term memory examined by target-nontarget analysis.

    PubMed

    Sapkota, Raju P; Pardhan, Shahina; van der Linde, Ian

    2016-08-01

    Visual short-term memory (VSTM) is a limited-capacity system that holds a small number of objects online simultaneously, implying that competition for limited storage resources occurs (Phillips, 1974). How the spatial and temporal proximity of stimuli affects this competition is unclear. In this 2-experiment study, we examined the effect of the spatial and temporal separation of real-world memory targets and erroneously selected nontarget items examined during location-recognition and object-recall tasks. In Experiment 1 (the location-recognition task), our test display comprised either the picture or name of 1 previously examined memory stimulus (rendered above as the stimulus-display area), together with numbered square boxes at each of the memory-stimulus locations used in that trial. Participants were asked to report the number inside the square box corresponding to the location at which the cued object was originally presented. In Experiment 2 (the object-recall task), the test display comprised a single empty square box presented at 1 memory-stimulus location. Participants were asked to report the name of the object presented at that location. In both experiments, nontarget objects that were spatially and temporally proximal to the memory target were confused more often than nontarget objects that were spatially and temporally distant (i.e., a spatiotemporal proximity effect); this effect generalized across memory tasks, and the object feature (picture or name) that cued the test-display memory target. Our findings are discussed in terms of spatial and temporal confusion "fields" in VSTM, wherein objects occupy diffuse loci in a spatiotemporal coordinate system, such that neighboring locations are more susceptible to confusion. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed Central

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task. PMID:24575058

  16. Lexical orthography acquisition: Is handwriting better than spelling aloud?

    PubMed

    Bosse, Marie-Line; Chaves, Nathalie; Valdois, Sylviane

    2014-01-01

    Lexical orthography acquisition is currently described as the building of links between the visual forms and the auditory forms of whole words. However, a growing body of data suggests that a motor component could further be involved in orthographic acquisition. A few studies support the idea that reading plus handwriting is a better lexical orthographic learning situation than reading alone. However, these studies did not explore which of the cognitive processes involved in handwriting enhanced lexical orthographic acquisition. Some findings suggest that the specific movements memorized when learning to write may participate in the establishment of orthographic representations in memory. The aim of the present study was to assess this hypothesis using handwriting and spelling aloud as two learning conditions. In two experiments, fifth graders were asked to read complex pseudo-words embedded in short sentences. Immediately after reading, participants had to recall the pseudo-words' spellings either by spelling them aloud or by handwriting them down. One week later, orthographic acquisition was tested using two post-tests: a pseudo-word production task (spelling by hand in Experiment 1 or spelling aloud in Experiment 2) and a pseudo-word recognition task. Results showed no significant difference in pseudo-word recognition between the two learning conditions. In the pseudo-word production task, orthography learning improved when the learning and post-test conditions were similar, thus showing a massive encoding-retrieval match effect in the two experiments. However, a mixed model analysis of the pseudo-word production results revealed a significant learning condition effect which remained after control of the encoding-retrieval match effect. This latter finding suggests that orthography learning is more efficient when mediated by handwriting than by spelling aloud, whatever the post-test production task.

  17. The Swedish Hayling task, and its relation to working memory, verbal ability, and speech-recognition-in-noise.

    PubMed

    Stenbäck, Victoria; Hällgren, Mathias; Lyxell, Björn; Larsby, Birgitta

    2015-06-01

    Cognitive functions and speech-recognition-in-noise were evaluated with a cognitive test battery, assessing response inhibition using the Hayling task, working memory capacity (WMC) and verbal information processing, and an auditory test of speech recognition. The cognitive tests were performed in silence whereas the speech recognition task was presented in noise. Thirty young normally-hearing individuals participated in the study. The aim of the study was to investigate one executive function, response inhibition, and whether it is related to individual working memory capacity (WMC), and how speech-recognition-in-noise relates to WMC and inhibitory control. The results showed a significant difference between initiation and response inhibition, suggesting that the Hayling task taps cognitive activity responsible for executive control. Our findings also suggest that high verbal ability was associated with better performance in the Hayling task. We also present findings suggesting that individuals who perform well on tasks involving response inhibition, and WMC, also perform well on a speech-in-noise task. Our findings indicate that capacity to resist semantic interference can be used to predict performance on speech-in-noise tasks. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  18. Cognitive Factors Affecting Free Recall, Cued Recall, and Recognition Tasks in Alzheimer's Disease

    PubMed Central

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Background/Aims: Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). Subjects: We recruited 349 consecutive AD patients who attended a memory clinic. Methods: Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Results: Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. Conclusion: The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients’ memory impairments in daily living. PMID:22962551

  19. Cognitive factors affecting free recall, cued recall, and recognition tasks in Alzheimer's disease.

    PubMed

    Yamagishi, Takashi; Sato, Takuya; Sato, Atsushi; Imamura, Toru

    2012-01-01

    Our aim was to identify cognitive factors affecting free recall, cued recall, and recognition tasks in patients with Alzheimer's disease (AD). We recruited 349 consecutive AD patients who attended a memory clinic. Each patient was assessed using the Alzheimer's Disease Assessment Scale (ADAS) and the extended 3-word recall test. In this task, each patient was asked to freely recall 3 previously presented words. If patients could not recall 1 or more of the target words, the examiner cued their recall by providing the category of the target word and then provided a forced-choice recognition of the target word with 2 distracters. The patients were divided into groups according to the results of the free recall, cued recall, and recognition tasks. Multivariate logistic regression analysis for repeated measures was carried out to evaluate the net effects of cognitive factors on the free recall, cued recall, and recognition tasks after controlling for the effects of age and recent memory deficit. Performance on the ADAS Orientation task was found to be related to performance on the free and cued recall tasks, performance on the ADAS Following Commands task was found to be related to performance on the cued recall task, and performance on the ADAS Ideational Praxis task was found to be related to performance on the free recall, cued recall, and recognition tasks. The extended 3-word recall test reflects deficits in a wider range of memory and other cognitive processes, including memory retention after interference, divided attention, and executive functions, compared with word-list recall tasks. The characteristics of the extended 3-word recall test may be advantageous for evaluating patients' memory impairments in daily living.

  20. Task difficulty moderates the revelation effect.

    PubMed

    Aßfalg, André; Currie, Devon; Bernstein, Daniel M

    2017-05-01

    Tasks that precede a recognition probe induce a more liberal response criterion than do probes without tasks, a phenomenon known as the "revelation effect." For example, participants are more likely to claim that a stimulus is familiar directly after solving an anagram, relative to a condition without an anagram. Revelation effect hypotheses disagree on whether hard preceding tasks should produce a larger revelation effect than easy preceding tasks. Although some studies have shown that hard tasks increase the revelation effect as compared to easy tasks, these studies suffered from a confound of task difficulty and task presence. Conversely, other studies have shown that the revelation effect is independent of task difficulty. In the present study, we used new task difficulty manipulations to test whether hard tasks produce larger revelation effects than easy tasks. Participants (N = 464) completed hard or easy preceding tasks, including anagrams (Exps. 1 and 2) and the typing of specific arrow key sequences (Exps. 3-6). With sample sizes typical of revelation effect experiments, the effect sizes of task difficulty on the revelation effect varied considerably across experiments. Despite this variability, a consistent data pattern emerged: Hard tasks produced larger revelation effects than easy tasks. Although the present study falsifies certain revelation effect hypotheses, the general vagueness of revelation effect hypotheses remains.

  1. Does object view influence the scene consistency effect?

    PubMed

    Sastyin, Gergo; Niimi, Ryosuke; Yokosawa, Kazuhiko

    2015-04-01

    Traditional research on the scene consistency effect only used clearly recognizable object stimuli to show mutually interactive context effects for both the object and background components on scene perception (Davenport & Potter in Psychological Science, 15, 559-564, 2004). However, in real environments, objects are viewed from multiple viewpoints, including an accidental, hard-to-recognize one. When the observers named target objects in scenes (Experiments 1a and 1b, object recognition task), we replicated the scene consistency effect (i.e., there was higher accuracy for the objects with consistent backgrounds). However, there was a significant interaction effect between consistency and object viewpoint, which indicated that the scene consistency effect was more important for identifying objects in the accidental view condition than in the canonical view condition. Therefore, the object recognition system may rely more on the scene context when the object is difficult to recognize. In Experiment 2, the observers identified the background (background recognition task) while the scene consistency and object views were manipulated. The results showed that object viewpoint had no effect, while the scene consistency effect was observed. More specifically, the canonical and accidental views both equally provided contextual information for scene perception. These findings suggested that the mechanism for conscious recognition of objects could be dissociated from the mechanism for visual analysis of object images that were part of a scene. The "context" that the object images provided may have been derived from its view-invariant, relatively low-level visual features (e.g., color), rather than its semantic information.

  2. Feature saliency in judging the sex and familiarity of faces.

    PubMed

    Roberts, T; Bruce, V

    1988-01-01

    Two experiments are reported on the effect of feature masking on judgements of the sex and familiarity of faces. In experiment 1 the effect of masking the eyes, nose, or mouth of famous and nonfamous, male and female faces on response times in two tasks was investigated. In the first (recognition) task, only masking of the eyes had a significant effect on response times. In the second (sex-judgement) task, masking of the nose gave rise to a significant and large increase in response times. In experiment 2 it was found that when facial features were presented in isolation in a sex-judgement task, responses to noses were at chance level, unlike those for eyes or mouths. It appears that visual information available from the nose in isolation from the rest of the face is not sufficient for sex judgement, yet masking of the nose may disrupt the extraction of information about the overall topography of the face, information that may be more useful for sex judgement than for identification of a face.

  3. Associative and semantic priming effects occur at very short stimulus-onset asynchronies in lexical decision and naming.

    PubMed

    Perea, M; Gotor, A

    1997-02-01

    Prior research has found significant associative/semantic priming effects at very short stimulus-onset asynchronies (SOAs) in experimental tasks such as lexical decision, but not in naming tasks (however, see Lukatela and Turvey, 1994). In this paper, the time course of associative priming effects was analyzed at several very short SOAs (33, 50, and 67 ms), using the masked priming paradigm (Forster and Davis, 1984), both in lexical decision (Experiment 1) and naming (Experiment 2). The results show small, but significant, associative priming effects in both tasks. Additionally, using the masked priming procedure at the 67 ms SOA, Experiments 3 and 4 show facilitatory priming effects for both associatively and semantically (unassociated) related pairs in lexical decision and naming tasks. That is, automatic priming can be semantic. Taken together, our data appear to support interactive models of word recognition in which semantic activation may influence the early stages of word processing.

  4. Investigating grounded conceptualization: motor system state-dependence facilitates familiarity judgments of novel tools.

    PubMed

    Matheson, Heath E; Familiar, Ariana M; Thompson-Schill, Sharon L

    2018-03-02

    Theories of embodied cognition propose that we recognize tools in part by reactivating sensorimotor representations of tool use in a process of simulation. If motor simulations play a causal role in tool recognition, then performing a concurrent motor task should differentially modulate recognition of experienced vs. non-experienced tools. We sought to test the hypothesis that an incompatible concurrent motor task modulates conceptual processing of learned vs. non-learned objects by directly manipulating the embodied experience of participants. We trained one group to use a set of novel, 3-D printed tools under the pretense that they were preparing for an archeological expedition to Mars (manipulation group); we trained a second group to report declarative information about how the tools are stored (storage group). With this design, familiarity and visual attention to different object parts were similar for both groups, though their qualitative interactions differed. After learning, participants made familiarity judgments of auditorily presented tool names while performing a concurrent motor task or simply sitting at rest. We showed that familiarity judgments were facilitated by motor state-dependence; specifically, in the manipulation group, familiarity was facilitated by a concurrent motor task, whereas in the storage group familiarity was facilitated while sitting at rest. These results are the first to directly show that manipulation experience differentially modulates conceptual processing of familiar vs. unfamiliar objects, suggesting that embodied representations contribute to recognizing tools.

  5. Parallel effects of processing fluency and positive affect on familiarity-based recognition decisions for faces.

    PubMed

    Duke, Devin; Fiacconi, Chris M; Köhler, Stefan

    2014-01-01

    According to attribution models of familiarity assessment, people can use a heuristic in recognition-memory decisions, in which they attribute the subjective ease of processing of a memory probe to a prior encounter with the stimulus in question. Research in social cognition suggests that experienced positive affect may be the proximal cue that signals fluency in various experimental contexts. In the present study, we compared the effects of positive affect and fluency on recognition-memory judgments for faces with neutral emotional expression. We predicted that if positive affect is indeed the critical cue that signals processing fluency at retrieval, then its manipulation should produce effects that closely mirror those produced by manipulations of processing fluency. In two experiments, we employed a masked-priming procedure in combination with a Remember-Know (RK) paradigm that aimed to separate familiarity- from recollection-based memory decisions. In addition, participants performed a prime-discrimination task that allowed us to take inter-individual differences in prime awareness into account. We found highly similar effects of our priming manipulations of processing fluency and of positive affect. In both cases, the critical effect was specific to familiarity-based recognition responses. Moreover, in both experiments it was reflected in a shift toward a more liberal response bias, rather than in changed discrimination. Finally, in both experiments, the effect was found to be related to prime awareness; it was present only in participants who reported a lack of such awareness on the prime-discrimination task. These findings add to a growing body of evidence that points not only to a role of fluency, but also of positive affect in familiarity assessment. As such they are consistent with the idea that fluency itself may be hedonically marked.

  6. ERP Correlates of Target-Distracter Differentiation in Repeated Runs of a Continuous Recognition Task with Emotional and Neutral Faces

    ERIC Educational Resources Information Center

    Treese, Anne-Cecile; Johansson, Mikael; Lindgren, Magnus

    2010-01-01

    The emotional salience of faces has previously been shown to induce memory distortions in recognition memory tasks. This event-related potential (ERP) study used repeated runs of a continuous recognition task with emotional and neutral faces to investigate emotion-induced memory distortions. In the second and third runs, participants made more…

  7. Evidence for the contribution of a threshold retrieval process to semantic memory.

    PubMed

    Kempnich, Maria; Urquhart, Josephine A; O'Connor, Akira R; Moulin, Chris J A

    2017-10-01

    It is widely held that episodic retrieval can recruit two processes: a threshold context retrieval process (recollection) and a continuous signal strength process (familiarity). Conversely, the processes recruited during semantic retrieval are less well specified. We developed a semantic task analogous to single-item episodic recognition to interrogate semantic recognition receiver-operating characteristics (ROCs) for a marker of a threshold retrieval process. We fitted observed ROC points to three signal detection models: two models typically used in episodic recognition (unequal variance and dual-process signal detection models) and a novel dual-process recollect-to-reject (DP-RR) signal detection model that allows a threshold recollection process to aid both target identification and lure rejection. Given the nature of most semantic questions, we anticipated the DP-RR model would best fit the semantic task data. Experiment 1 (506 participants) provided evidence for a threshold retrieval process in semantic memory, with overall best fits to the DP-RR model. Experiment 2 (316 participants) found within-subjects estimates of episodic and semantic threshold retrieval to be uncorrelated. Our findings add weight to the proposal that semantic and episodic memory are served by similar dual-process retrieval systems, though the relationship between the two threshold processes needs to be more fully elucidated.
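
    For readers unfamiliar with how a threshold process shapes an ROC, the sketch below generates predicted hit and false-alarm rates from the standard dual-process signal detection model, in which recollection occurs with probability R and familiarity is a continuous d' process. It implements only the basic DPSD form, not the article's recollect-to-reject (DP-RR) extension, and the parameter values and criteria are invented for illustration.

      # Minimal sketch: predicted ROC points under the dual-process signal detection model.
      import numpy as np
      from scipy.stats import norm

      def dpsd_roc(recollection, d_prime, criteria):
          """Return predicted (false-alarm, hit) rate pairs, one per confidence criterion."""
          criteria = np.asarray(criteria, dtype=float)
          hits = recollection + (1 - recollection) * norm.sf(criteria, loc=d_prime)
          false_alarms = norm.sf(criteria, loc=0.0)   # lures carry no recollection signal here
          return false_alarms, hits

      fas, hits = dpsd_roc(recollection=0.3, d_prime=1.0, criteria=[-1, -0.5, 0, 0.5, 1, 1.5])
      for fa, hit in zip(fas, hits):
          print(f"FA={fa:.3f}  Hit={hit:.3f}")   # the R > 0 component gives the ROC its asymmetry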

  8. Holistic word processing in dyslexia

    PubMed Central

    Conway, Aisling; Misra, Karuna

    2017-01-01

    People with dyslexia have difficulty learning to read and many lack fluent word recognition as adults. In a novel task that borrows elements of the ‘word superiority’ and ‘word inversion’ paradigms, we investigate whether holistic word recognition is impaired in dyslexia. In Experiment 1, students with dyslexia and controls judged the similarity of pairs of 6- and 7-letter words or pairs of words whose letters had been partially jumbled. The stimuli were presented in both upright and inverted form with orthographic regularity and orientation randomized from trial to trial. While both groups showed sensitivity to orthographic regularity, both word inversion and letter jumbling were more detrimental to skilled than dyslexic readers, supporting the idea that the latter may read in a more analytic fashion. Experiment 2 employed the same task but used shorter, 4- and 5-letter words and a design in which orthographic regularity and stimulus orientation were held constant within experimental blocks to encourage the use of either holistic or analytic processing. While there was no difference in reaction time between the dyslexic and control groups for inverted stimuli, the students with dyslexia were significantly slower than controls for upright stimuli. These findings suggest that holistic word recognition, which is largely based on the detection of orthographic regularity, is impaired in dyslexia. PMID:29121046

  9. Prevalence of face recognition deficits in middle childhood.

    PubMed

    Bennetts, Rachel J; Murray, Ebony; Boyce, Tian; Bate, Sarah

    2017-02-01

    Approximately 2-2.5% of the adult population is believed to show severe difficulties with face recognition, in the absence of any neurological injury, a condition known as developmental prosopagnosia (DP). However, to date no research has attempted to estimate the prevalence of face recognition deficits in children, possibly because there are very few child-friendly, well-validated tests of face recognition. In the current study, we examined face and object recognition in a group of primary school children (aged 5-11 years), to establish whether our tests were suitable for children and to provide an estimate of face recognition difficulties in children. In Experiment 1 (n = 184), children completed a pre-existing test of child face memory, the Cambridge Face Memory Test-Kids (CFMT-K), and a bicycle test with the same format. In Experiment 2 (n = 413), children completed three-alternative forced-choice matching tasks with faces and bicycles. All tests showed good psychometric properties. The face and bicycle tests were well matched for difficulty and showed a similar developmental trajectory. Neither the memory nor the matching tests were suitable to detect impairments in the youngest groups of children, but both tests appear suitable to screen for face recognition problems in middle childhood. In the current sample, 1.2-5.2% of children showed difficulties with face recognition; 1.2-4% showed face-specific difficulties, that is, poor face recognition with typical object recognition abilities. This is somewhat higher than previous adult estimates: It is possible that face matching tests overestimate the prevalence of face recognition difficulties in children; alternatively, some children may "outgrow" face recognition difficulties.

  10. Short exposure to a diet rich in both fat and sugar or sugar alone impairs place, but not object recognition memory in rats.

    PubMed

    Beilharz, Jessica E; Maniam, Jayanthi; Morris, Margaret J

    2014-03-01

    High energy diets have been shown to impair cognition; however, the rapidity of these effects, and the dietary component/s responsible, are currently unclear. We conducted two experiments in rats to examine the effects of short-term exposure to a diet rich in sugar and fat or rich in sugar on object (perirhinal-dependent) and place (hippocampal-dependent) recognition memory, and the role of inflammatory mediators in these responses. In Experiment 1, rats fed a cafeteria style diet containing chow supplemented with lard, cakes, biscuits, and a 10% sucrose solution performed worse on the place, but not the object recognition task, than chow fed control rats when tested after 5, 11, and 20 days. In Experiment 2, rats fed the cafeteria style diet either with or without sucrose and rats fed chow supplemented with sucrose also performed worse on the place, but not the object recognition task when tested after 5, 11, and 20 days. Rats fed the cafeteria diets consumed five times more energy than control rats and exhibited increased plasma leptin, insulin and triglyceride concentrations; these were not affected in the sucrose-only rats. Rats exposed to sucrose exhibited both increased hippocampal inflammation (TNF-α and IL-1β mRNA) and oxidative stress, as indicated by an upregulation of NRF1 mRNA compared to control rats. In contrast, these markers were not significantly elevated in rats that received the cafeteria diet without added sucrose. Hippocampal BDNF and neuritin mRNA were similar across all groups. These results show that relatively short exposures to diets rich in both fat and sugar or rich in sugar impair hippocampal-dependent place recognition memory prior to the emergence of weight differences, and suggest a role for oxidative stress and neuroinflammation in this impairment. Crown Copyright © 2013. Published by Elsevier Inc. All rights reserved.

  11. The effects of divided attention on auditory priming.

    PubMed

    Mulligan, Neil W; Duke, Marquinn; Cooper, Angela W

    2007-09-01

    Traditional theorizing stresses the importance of attentional state during encoding for later memory, based primarily on research with explicit memory. Recent research has begun to investigate the role of attention in implicit memory but has focused almost exclusively on priming in the visual modality. The present experiments examined the effect of divided attention on auditory implicit memory, using auditory perceptual identification, word-stem completion and word-fragment completion. Participants heard study words under full attention conditions or while simultaneously carrying out a distractor task (the divided attention condition). In Experiment 1, a distractor task with low response frequency failed to disrupt later auditory priming (but diminished explicit memory as assessed with auditory recognition). In Experiment 2, a distractor task with greater response frequency disrupted priming on all three of the auditory priming tasks as well as the explicit test. These results imply that although auditory priming is less reliant on attention than explicit memory, it is still greatly affected by at least some divided-attention manipulations. These results are consistent with research using visual priming tasks and have relevance for hypotheses regarding attention and auditory priming.

  12. Interactions between auditory 'what' and 'where' pathways revealed by enhanced near-threshold discrimination of frequency and position.

    PubMed

    Tardif, Eric; Spierer, Lucas; Clarke, Stephanie; Murray, Micah M

    2008-03-07

    Partially segregated neuronal pathways ("what" and "where" pathways, respectively) are thought to mediate sound recognition and localization. Less studied are interactions between these pathways. In two experiments, we investigated whether near-threshold pitch discrimination sensitivity (d') is altered by supra-threshold task-irrelevant position differences and likewise whether near-threshold position discrimination sensitivity is altered by supra-threshold task-irrelevant pitch differences. Each experiment followed a 2 x 2 within-subjects design regarding changes/no change in the task-relevant and task-irrelevant stimulus dimensions. In Experiment 1, subjects discriminated between 750 Hz and 752 Hz pure tones, and d' for this near-threshold pitch change significantly increased by a factor of 1.09 when accompanied by a task-irrelevant position change of 65 micros interaural time difference (ITD). No response bias was induced by the task-irrelevant position change. In Experiment 2, subjects discriminated between 385 micros and 431 micros ITDs, and d' for this near-threshold position change significantly increased by a factor of 0.73 when accompanied by task-irrelevant pitch changes (6 Hz). In contrast to Experiment 1, task-irrelevant pitch changes induced a response criterion bias toward responding that the two stimuli differed. The collective results are indicative of facilitative interactions between "what" and "where" pathways. By demonstrating how these pathways may cooperate under impoverished listening conditions, our results bear implications for possible neuro-rehabilitation strategies. We discuss our results in terms of the dual-pathway model of auditory processing.
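
    The sensitivity measure reported above, d', is computed from hit and false-alarm rates. The sketch below shows a conventional calculation with a simple correction for extreme proportions; the trial counts are hypothetical and are not the data of the experiments described.

      # Minimal sketch: d' from hit and false-alarm counts in a same/different design.
      from scipy.stats import norm

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' = z(hit rate) - z(false-alarm rate), with a 0.5 correction per cell."""
          hit_rate = (hits + 0.5) / (hits + misses + 1)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
          return norm.ppf(hit_rate) - norm.ppf(fa_rate)

      # Hypothetical "different pitch" discrimination counts, without and with a
      # task-irrelevant position change accompanying the pitch change.
      print(d_prime(hits=62, misses=38, false_alarms=30, correct_rejections=70))
      print(d_prime(hits=70, misses=30, false_alarms=30, correct_rejections=70))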

  13. From Perception to Metacognition: Auditory and Olfactory Functions in Early Blind, Late Blind, and Sighted Individuals

    PubMed Central

    Cornell Kärnekull, Stina; Arshamian, Artin; Nilsson, Mats E.; Larsson, Maria

    2016-01-01

    Although evidence is mixed, studies have shown that blind individuals perform better than sighted at specific auditory, tactile, and chemosensory tasks. However, few studies have assessed blind and sighted individuals across different sensory modalities in the same study. We tested early blind (n = 15), late blind (n = 15), and sighted (n = 30) participants with analogous olfactory and auditory tests in absolute threshold, discrimination, identification, episodic recognition, and metacognitive ability. Although the multivariate analysis of variance (MANOVA) showed no overall effect of blindness and no interaction with modality, follow-up between-group contrasts indicated an advantage of blind over sighted participants in auditory episodic recognition that was most pronounced in early blind individuals. In contrast to the auditory modality, there was no empirical support for compensatory effects in any of the olfactory tasks. There was no conclusive evidence for group differences in metacognitive ability to predict episodic recognition performance. Taken together, the results showed no evidence of an overall superior performance in blind relative to sighted individuals across olfactory and auditory functions, although early blind individuals excelled in episodic auditory recognition memory. This observation may be related to an experience-induced increase in auditory attentional capacity. PMID:27729884

  14. Differential effects of white noise in cognitive and perceptual tasks

    PubMed Central

    Herweg, Nora A.; Bunzeck, Nico

    2015-01-01

    Beneficial effects of noise on higher cognition have recently attracted attention. Hypothesizing an involvement of the mesolimbic dopamine system and its functional interactions with cortical areas, the current study aimed to demonstrate a facilitation of dopamine-dependent attentional and mnemonic functions by externally applying white noise in five behavioral experiments including a total sample of 167 healthy human subjects. During working memory, acoustic white noise impaired accuracy when presented during the maintenance period (Experiments 1–3). In a reward based long-term memory task, white noise accelerated perceptual judgments for scene images during encoding but left subsequent recognition memory unaffected (Experiment 4). In a modified Posner task (Experiment 5), the benefit due to white noise in attentional orienting correlated weakly with reward dependence, a personality trait that has been associated with the dopaminergic system. These results suggest that white noise has no general effect on cognitive functions. Instead, they indicate differential effects on perception and cognition depending on a variety of factors such as task demands and timing of white noise presentation. PMID:26579024

  15. Neural Substrates of View-Invariant Object Recognition Developed without Experiencing Rotations of the Objects

    PubMed Central

    Okamura, Jun-ya; Yamaguchi, Reona; Honda, Kazunari; Tanaka, Keiji

    2014-01-01

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects at each of several viewing angles develops partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn underlies the monkeys' emergent capability to discriminate the objects across changes in viewing angle. PMID:25378169

  16. The Automaticity of Emotional Face-Context Integration

    PubMed Central

    Aviezer, Hillel; Dudarev, Veronica; Bentin, Shlomo; Hassin, Ran R.

    2011-01-01

    Recent studies have demonstrated that context can dramatically influence the recognition of basic facial expressions, yet the nature of this phenomenon is largely unknown. In the present paper we begin to characterize the underlying process of face-context integration. Specifically, we examine whether it is a relatively controlled or automatic process. In Experiment 1 participants were motivated and instructed to avoid using the context while categorizing contextualized facial expressions, or they were led to believe that the context was irrelevant. Nevertheless, they were unable to disregard the context, which exerted a strong effect on their emotion recognition. In Experiment 2, participants categorized contextualized facial expressions while engaged in a concurrent working memory task. Despite the load, the context exerted a strong influence on their recognition of facial expressions. These results suggest that facial expressions and their body contexts are integrated in an unintentional, uncontrollable, and relatively effortless manner. PMID:21707150

  17. Repetition priming across distinct contexts: effects of lexical status, word frequency, and retrieval test.

    PubMed

    Coane, Jennifer H; Balota, David A

    2010-12-01

    Repetition priming, the facilitation observed when a target is preceded by an identity prime, is a robust phenomenon that occurs across a variety of conditions. Oliphant (1983), however, failed to observe repetition priming for targets embedded in the instructions to an experiment in a subsequent lexical decision task. In the present experiments, we examined the roles of priming context (list or instructions), target lexicality, and target frequency in both lexical decision and episodic recognition performance. Initial encoding context did not modulate priming in lexical decision or recognition memory for low-frequency targets or nonwords, whereas context strongly modulated episodic recognition for high-frequency targets. The results indicate that priming across contexts is sensitive to the distinctiveness of the trace and the reliance on episodic retrieval mechanisms. These results also shed light on the influence of event boundaries, such that priming occurs across different events for relatively distinct (low-frequency) items.

  18. How does interhemispheric communication in visual word recognition work? Deciding between early and late integration accounts of the split fovea theory.

    PubMed

    Van der Haegen, Lise; Brysbaert, Marc; Davis, Colin J

    2009-02-01

    It has recently been shown that interhemispheric communication is needed for the processing of foveally presented words. In this study, we examine whether the integration of information happens at an early stage, before word recognition proper starts, or whether the integration is part of the recognition process itself. Two lexical decision experiments are reported in which words were presented at different fixation positions. In Experiment 1, a masked form priming task was used with primes that had two adjacent letters transposed. The results showed that although the fixation position had a substantial influence on the transposed letter priming effect, the priming was not smaller when the transposed letters were sent to different hemispheres than when they were projected to the same hemisphere. In Experiment 2, stimuli were presented that either had high frequency hemifield competitors or could be identified unambiguously on the basis of the information in one hemifield. Again, the lexical decision times did not vary as a function of hemifield competitors. These results are consistent with the early integration account, as presented in the SERIOL model of visual word recognition.

  19. Encoding instructions and stimulus presentation in local environmental context-dependent memory studies.

    PubMed

    Markopoulos, G; Rutherford, A; Cairns, C; Green, J

    2010-08-01

    Murnane and Phelps (1993) recommend word pair presentations in local environmental context (EC) studies to prevent associations being formed between successively presented items and their ECs and a consequent reduction in the EC effect. Two experiments were conducted to assess the veracity of this assumption. In Experiment 1, participants memorised single words or word pairs, or categorised them as natural or man-made. Their free recall protocols were examined to assess any associations established between successively presented items. Fewest associations were observed when the item-specific encoding task (i.e., natural or man-made categorisation of word referents) was applied to single words. These findings were examined further in Experiment 2, where the influence of encoding instructions and stimulus presentation on local EC dependent recognition memory was examined. Consistent with recognition dual-process signal detection model predictions and findings (e.g., Macken, 2002; Parks & Yonelinas, 2008), recollection sensitivity, but not familiarity sensitivity, was found to be local EC dependent. However, local EC dependent recognition was observed only after item-specific encoding instructions, irrespective of stimulus presentation. These findings and the existing literature suggest that the use of single word presentations and item-specific encoding enhances local EC dependent recognition.

  20. Expert Knowledge, Distinctiveness, and Levels of Processing in Language Learning

    ERIC Educational Resources Information Center

    Bird, Steve

    2012-01-01

    The foreign language vocabulary learning research literature often attributes strong mnemonic potency to the cognitive processing of meaning when learning words. Routinely cited as support for this idea are experiments by Craik and Tulving (C&T) demonstrating superior recognition and recall of studied words following semantic tasks ("deep"…

  1. Fine-grained recognition of plants from images.

    PubMed

    Šulc, Milan; Matas, Jiří

    2017-01-01

    Fine-grained recognition of plants from images is a challenging computer vision task, due to the diverse appearance and complex structure of plants, high intra-class variability and small inter-class differences. We review the state-of-the-art and discuss plant recognition tasks, from identification of plants from specific plant organs to general plant recognition "in the wild". We propose texture analysis and deep learning methods for different plant recognition tasks. The methods are evaluated and compared to the state-of-the-art. Texture analysis is only applied to images with unambiguous segmentation (bark and leaf recognition), whereas CNNs are only applied when sufficiently large datasets are available. The results provide insight into the complexity of different plant recognition tasks. The proposed methods outperform the state-of-the-art in leaf and bark classification and achieve very competitive results in plant recognition "in the wild". The results suggest that recognition of segmented leaves is practically a solved problem when high volumes of training data are available. The generality and higher capacity of state-of-the-art CNNs make them suitable for plant recognition "in the wild", where the views on plant organs or plants vary significantly and the difficulty is increased by occlusions and background clutter.
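
    As a concrete starting point for this kind of fine-grained recognition, transfer learning from a pretrained CNN is a common baseline. The sketch below (PyTorch, not the authors' implementation; the dataset directory is a hypothetical ImageFolder layout and the weights enum assumes torchvision >= 0.13) fine-tunes a ResNet-50 on plant images:

      # Hedged sketch: fine-tune a pretrained CNN for plant classification.
      # Not the paper's networks; the data path and hyperparameters are assumptions.
      import torch
      from torch import nn, optim
      from torchvision import datasets, models, transforms

      tfm = transforms.Compose([
          transforms.Resize(256),
          transforms.CenterCrop(224),
          transforms.ToTensor(),
          transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
      ])
      train_set = datasets.ImageFolder("plants/train", transform=tfm)  # hypothetical path
      loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

      model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
      model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # new classifier head

      optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
      criterion = nn.CrossEntropyLoss()

      model.train()
      for images, labels in loader:        # a single epoch, shown for brevity
          optimizer.zero_grad()
          loss = criterion(model(images), labels)
          loss.backward()
          optimizer.step()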

  2. How does creating a concept map affect item-specific encoding?

    PubMed

    Grimaldi, Phillip J; Poston, Laurel; Karpicke, Jeffrey D

    2015-07-01

    Concept mapping has become a popular learning tool. However, the processes underlying the task are poorly understood. In the present study, we examined the effect of creating a concept map on the processing of item-specific information. In 2 experiments, subjects learned categorized or ad hoc word lists by making pleasantness ratings, sorting words into categories, or creating a concept map. Memory was tested using a free recall test and a recognition memory test, which is considered to be especially sensitive to item-specific processing. Typically, tasks that promote item-specific processing enhance free recall of categorized lists, relative to category sorting. Concept mapping resulted in lower recall performance than both the pleasantness rating and category sorting condition for categorized words. Moreover, concept mapping resulted in lower recognition memory performance than the other 2 tasks. These results converge on the conclusion that creating a concept map disrupts the processing of item-specific information. (c) 2015 APA, all rights reserved.

  3. Breastfeeding experience differentially impacts recognition of happiness and anger in mothers.

    PubMed

    Krol, Kathleen M; Kamboj, Sunjeev K; Curran, H Valerie; Grossmann, Tobias

    2014-11-12

    Breastfeeding is a dynamic biological and social process based on hormonal regulation involving oxytocin. While there is much work on the role of breastfeeding in infant development and on the role of oxytocin in socio-emotional functioning in adults, little is known about how breastfeeding impacts emotion perception during motherhood. We therefore examined whether breastfeeding influences emotion recognition in mothers. Using a dynamic emotion recognition task, we found that longer durations of exclusive breastfeeding were associated with faster recognition of happiness, providing evidence for a facilitation of processing positive facial expressions. In addition, we found that a greater number of breastfed meals per day was associated with slower recognition of anger. Our findings are in line with current views of oxytocin function and support accounts that view maternal behaviour as tuned to prosocial responsiveness, by showing that vital elements of maternal care can facilitate rapid responding to affiliative stimuli by reducing the importance of threatening stimuli.

  4. Scene recognition based on integrating active learning with dictionary learning

    NASA Astrophysics Data System (ADS)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large number of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as its classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
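
    The sampling idea sketched in the abstract, scoring unlabeled images by both uncertainty and representativeness and sending the top-ranked ones for labeling, can be illustrated generically. The snippet below uses a logistic-regression stand-in rather than the paper's DPL classifier, entropy for uncertainty, and mean cosine similarity to the pool for representativeness; it shows the criterion, not the IALDL algorithm itself:

      # Illustrative active-learning query selection: uncertainty + representativeness.
      # The classifier is a stand-in; IALDL itself uses dictionary pair learning (DPL).
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics.pairwise import cosine_similarity

      def select_queries(X_labeled, y_labeled, X_pool, n_queries=10, alpha=0.5):
          clf = LogisticRegression(max_iter=1000).fit(X_labeled, y_labeled)
          proba = clf.predict_proba(X_pool)
          # Uncertainty: entropy of the predicted class distribution for each pool sample.
          uncertainty = -np.sum(proba * np.log(proba + 1e-12), axis=1)
          # Representativeness: mean similarity of each sample to the rest of the pool.
          representativeness = cosine_similarity(X_pool).mean(axis=1)
          score = alpha * uncertainty + (1 - alpha) * representativeness
          return np.argsort(score)[::-1][:n_queries]   # indices to label next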

  5. Development of detection and recognition of orientation of geometric and real figures.

    PubMed

    Stein, N L; Mandler, J M

    1975-06-01

    Black and white kindergarten and second-grade children were tested for accuracy of detection and recognition of orientation and location changes in pictures of real-world and geometric figures. No differences were found in accuracy of recognition between the 2 kinds of pictures, but patterns of verbalization differed on specific transformations. Although differences in accuracy were found between kindergarten and second grade on an initial recognition task, practice on a matching-to-sample task eliminated differences on a second recognition task. Few ethnic differences were found on accuracy of recognition, but significant differences were found in amount of verbal output on specific transformations. For both groups, mention of orientation changes was markedly reduced when location changes were present.

  6. Developing a Natural User Interface and Facial Recognition System With OpenCV and the Microsoft Kinect

    NASA Technical Reports Server (NTRS)

    Gutensohn, Michael

    2018-01-01

    The task for this project was to design, develop, test, and deploy a facial recognition system for the Kennedy Space Center Augmented/Virtual Reality Lab. This system will serve as a means of user authentication as part of the NUI of the lab. The overarching goal is to create a seamless user interface that will allow the user to initiate and interact with AR and VR experiences without ever needing to use a mouse or keyboard at any step in the process.

  7. Glucose enhancement of a facial recognition task in young adults.

    PubMed

    Metzger, M M

    2000-02-01

    Numerous studies have reported that glucose administration enhances memory processes in both elderly and young adult subjects. Although these studies have utilized a variety of procedures and paradigms, investigations of both young and elderly subjects have typically used verbal tasks (word list recall, paragraph recall, etc.). In the present study, the effect of glucose consumption on a nonverbal, facial recognition task in young adults was examined. Lemonade sweetened with either glucose (50 g) or saccharin (23.7 mg) was consumed by college students (mean age of 21.1 years) 15 min prior to a facial recognition task. The task consisted of a familiarization phase in which subjects were presented with "target" faces, followed immediately by a recognition phase in which subjects had to identify the targets among a random array of familiar target and novel "distractor" faces. Statistical analysis indicated that there were no differences on hit rate (target identification) for subjects who consumed either saccharin or glucose prior to the test. However, further analyses revealed that subjects who consumed glucose committed significantly fewer false alarms and had (marginally) higher d-prime scores (a signal detection measure) compared to subjects who consumed saccharin prior to the test. These results parallel a previous report demonstrating glucose enhancement of a facial recognition task in probable Alzheimer's patients; however, this is believed to be the first demonstration of glucose enhancement for a facial recognition task in healthy, young adults.

  8. [Explicit memory for type font of words in source monitoring and recognition tasks].

    PubMed

    Hatanaka, Yoshiko; Fujita, Tetsuya

    2004-02-01

    We investigated whether people can consciously remember the type fonts of words using methods that examine explicit memory: source monitoring and old/new recognition. We set matched, non-matched, and non-studied conditions between the study and test words using two kinds of type fonts: Gothic and MARU. After studying words in one way of encoding, semantic or physical, subjects in a source-monitoring task made a three-way discrimination between new words, Gothic words, and MARU words (Exp. 1). Subjects in an old/new-recognition task indicated whether test words had been presented previously or not (Exp. 2). We compared the source judgments with the old/new recognition data. The data showed conscious recollection of the type font of words in the source-monitoring task and a dissociation between source monitoring and old/new recognition performance.

  9. The 'Reading the Mind in Films' Task [child version]: complex emotion and mental state recognition in children with and without autism spectrum conditions.

    PubMed

    Golan, Ofer; Baron-Cohen, Simon; Golan, Yael

    2008-09-01

    Children with autism spectrum conditions (ASC) have difficulties recognizing others' emotions. Research has mostly focused on basic emotion recognition, devoid of context. This study reports the results of a new task, assessing recognition of complex emotions and mental states in social contexts. An ASC group (n = 23) was compared to a general population control group (n = 24). Children with ASC performed lower than controls on the task. Using task scores, more than 87% of the participants were allocated to their group. This new test quantifies complex emotion and mental state recognition in life-like situations. Our findings reveal that children with ASC have residual difficulties in this aspect of empathy. The use of language-based compensatory strategies for emotion recognition is discussed.

  10. Alterations in Resting-State Activity Relate to Performance in a Verbal Recognition Task

    PubMed Central

    López Zunini, Rocío A.; Thivierge, Jean-Philippe; Kousaie, Shanna; Sheppard, Christine; Taler, Vanessa

    2013-01-01

    In the brain, resting-state activity refers to non-random patterns of intrinsic activity occurring when participants are not actively engaged in a task. We monitored resting-state activity using electroencephalography (EEG) both before and after a verbal recognition task. We show a strong positive correlation between accuracy in verbal recognition and pre-task resting-state alpha power at posterior sites. We further characterized this effect by examining post-task resting-state activity. We found marked alterations in resting-state alpha power when comparing pre- and post-task periods, with more pronounced alterations in participants who attained higher task accuracy. These findings support a dynamical view of cognitive processes in which patterns of ongoing brain activity can facilitate, or interfere with, optimal task performance. PMID:23785436
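
    Posterior resting-state alpha power of the kind reported here is usually estimated from the EEG power spectrum. A minimal sketch follows; the sampling rate, window length, and 8-12 Hz band limits are common defaults, not parameters taken from this study, and the data are simulated:

      # Hedged sketch: estimate alpha-band (8-12 Hz) power for one EEG channel.
      import numpy as np
      from scipy.signal import welch

      def alpha_power(signal, fs=250.0, band=(8.0, 12.0)):
          """Mean power spectral density in the alpha band for a 1-D EEG trace."""
          freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))  # 4-second windows
          mask = (freqs >= band[0]) & (freqs <= band[1])
          return psd[mask].mean()

      # Hypothetical usage with 60 s of simulated data standing in for a posterior channel.
      rng = np.random.default_rng(0)
      fake_eeg = rng.standard_normal(int(60 * 250))
      print(alpha_power(fake_eeg))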

  11. Age-Related Differences in Listening Effort During Degraded Speech Recognition.

    PubMed

    Ward, Kristina M; Shen, Jing; Souza, Pamela E; Grieco-Calub, Tina M

    The purpose of the present study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Twenty-five younger adults (YA; 18-24 years) and 21 older adults (OA; 56-82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants' responses. Performance on the primary and secondary task was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners' performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (single task vs. dual task); and (3) per group (YA vs. OA). Speech recognition declined with increasing spectral degradation for both YA and OA when they performed the task in isolation or concurrently with the visual monitoring task. OA were slower and less accurate than YA on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared with single-task performance, OA experienced greater declines in secondary-task accuracy, but not reaction time, than YA. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. OA experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than YA. These findings are interpreted as suggesting that OA expended greater listening effort than YA, which may be partially attributed to age-related differences in executive control.
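
    Noise-band vocoding, used above to degrade the sentences into 8, 6, or 4 spectral channels, splits speech into band-limited channels, extracts each channel's amplitude envelope, and uses the envelope to modulate band-limited noise before summing the channels. A compact sketch follows; the filter orders, corner frequencies, and envelope cutoff are typical textbook choices, not the study's exact parameters:

      # Hedged noise-band vocoder sketch; parameter values are illustrative only.
      import numpy as np
      from scipy.signal import butter, filtfilt

      def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
          edges = np.geomspace(f_lo, f_hi, n_channels + 1)    # log-spaced band edges
          carrier = np.random.default_rng(0).standard_normal(len(speech))  # noise carrier
          out = np.zeros(len(speech))
          env_b, env_a = butter(2, 50.0 / (fs / 2))           # 50 Hz envelope smoother
          for lo, hi in zip(edges[:-1], edges[1:]):
              b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype="band")
              band_speech = filtfilt(b, a, speech)            # speech in this channel
              band_noise = filtfilt(b, a, carrier)            # matching noise band
              envelope = filtfilt(env_b, env_a, np.abs(band_speech))  # rectify + low-pass
              out += np.clip(envelope, 0, None) * band_noise  # modulate noise by envelope
          return out / (np.max(np.abs(out)) + 1e-12)          # normalize amplitude

      # Hypothetical usage: degraded = noise_vocode(sentence_waveform, fs=16000, n_channels=4)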

  12. Destination memory for self-generated actions.

    PubMed

    El Haj, Mohamad

    2016-10-01

    There is a substantial body of literature showing memory enhancement for self-generated information in normal aging. The present paper investigated this outcome for destination memory or memory for outputted information. In Experiment 1, younger adults and older adults had to place (self-generated actions) and observe an experimenter placing (experiment-generated actions) items into two different destinations (i.e., a black circular box and a white square box). On a subsequent recognition task, the participants had to decide into which box each item had originally been placed. These procedures showed better destination memory for self- than experimenter-generated actions. In Experiment 2, destination and source memory were assessed for self-generated actions. Younger adults and older adults had to place items into the two boxes (self-generated actions), take items out of the boxes (self-generated actions), and observe an experimenter taking items out of the boxes (experiment-generated actions). On a subsequent recognition task, they had to decide into which box (destination memory)/from which box (source memory) each item had originally been placed/taken. For both populations, source memory was better than destination memory for self-generated actions, and both were better than source memory for experimenter-generated actions. Taken together, these findings highlight the beneficial effect of self-generation on destination memory in older adults.

  13. Insensitivity of visual short-term memory to irrelevant visual information.

    PubMed

    Andrade, Jackie; Kemps, Eva; Werniers, Yves; May, Jon; Szmalec, Arnaud

    2002-07-01

    Several authors have hypothesized that visuo-spatial working memory is functionally analogous to verbal working memory. Irrelevant background speech impairs verbal short-term memory. We investigated whether irrelevant visual information has an analogous effect on visual short-term memory, using a dynamic visual noise (DVN) technique known to disrupt visual imagery (Quinn & McConnell, 1996b). Experiment 1 replicated the effect of DVN on pegword imagery. Experiments 2 and 3 showed no effect of DVN on recall of static matrix patterns, despite a significant effect of a concurrent spatial tapping task. Experiment 4 showed no effect of DVN on encoding or maintenance of arrays of matrix patterns, despite testing memory by a recognition procedure to encourage visual rather than spatial processing. Serial position curves showed a one-item recency effect typical of visual short-term memory. Experiment 5 showed no effect of DVN on short-term recognition of Chinese characters, despite effects of visual similarity and a concurrent colour memory task that confirmed visual processing of the characters. We conclude that irrelevant visual noise does not impair visual short-term memory. Visual working memory may not be functionally analogous to verbal working memory, and different cognitive processes may underlie visual short-term memory and visual imagery.

  14. An attentional bias for LEGO® people using a change detection task: Are LEGO® people animate?

    PubMed

    LaPointe, Mitchell R P; Cullen, Rachael; Baltaretu, Bianca; Campos, Melissa; Michalski, Natalie; Sri Satgunarajah, Suja; Cadieux, Michelle L; Pachai, Matthew V; Shore, David I

    2016-09-01

    Animate objects have been shown to elicit attentional priority in a change detection task. This benefit has been seen for both human and nonhuman animals compared with inanimate objects. One explanation for these results has been based on the importance animate objects have served over the course of our species' history. In the present set of experiments, we present stimuli, which could be perceived as animate, but with which our distant ancestors would have had no experience, and natural selection could have no direct pressure on their prioritization. In the first experiment, we compared LEGO® "people" with LEGO "nonpeople" in a change detection task. In a second experiment, we attempt to control the heterogeneity of the nonanimate objects by using LEGO blocks, matched in size and colour to LEGO people. In the third experiment, we occlude the faces of the LEGO people to control for facial pattern recognition. In the final 2 experiments, we attempt to obscure high-level categorical information processing of the stimuli by inverting and blurring the scenes. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  15. Place recognition and heading retrieval are mediated by dissociable cognitive systems in mice.

    PubMed

    Julian, Joshua B; Keinath, Alexander T; Muzzio, Isabel A; Epstein, Russell A

    2015-05-19

    A lost navigator must identify its current location and recover its facing direction to restore its bearings. We tested the idea that these two tasks--place recognition and heading retrieval--might be mediated by distinct cognitive systems in mice. Previous work has shown that numerous species, including young children and rodents, use the geometric shape of local space to regain their sense of direction after disorientation, often ignoring nongeometric cues even when they are informative. Notably, these experiments have almost always been performed in single-chamber environments in which there is no ambiguity about place identity. We examined the navigational behavior of mice in a two-chamber paradigm in which animals had to both recognize the chamber in which they were located (place recognition) and recover their facing direction within that chamber (heading retrieval). In two experiments, we found that mice used nongeometric features for place recognition, but simultaneously failed to use these same features for heading retrieval, instead relying exclusively on spatial geometry. These results suggest the existence of separate systems for place recognition and heading retrieval in mice that are differentially sensitive to geometric and nongeometric cues. We speculate that a similar cognitive architecture may underlie human navigational behavior.

  16. Feature binding and attention in working memory: a resolution of previous contradictory findings.

    PubMed

    Allen, Richard J; Hitch, Graham J; Mate, Judit; Baddeley, Alan D

    2012-01-01

    We aimed to resolve an apparent contradiction between previous experiments from different laboratories, using dual-task methodology to compare effects of a concurrent executive load on immediate recognition memory for colours or shapes of items or their colour-shape combinations. Results of two experiments confirmed previous evidence that an irrelevant attentional load interferes equally with memory for features and memory for feature bindings. Detailed analyses suggested that previous contradictory evidence arose from limitations in the way recognition memory was measured. The present findings are inconsistent with an earlier suggestion that feature binding takes place within a multimodal episodic buffer (Baddeley, 2000) and support a subsequent account in which binding takes place automatically prior to information entering the episodic buffer (Baddeley, Allen, & Hitch, 2011). Methodologically, the results suggest that different measures of recognition memory performance (A', d', corrected recognition) give a converging picture of main effects, but are less consistent in detecting interactions. We suggest that this limitation on the reliability of measuring recognition should be taken into account in future research so as to avoid problems of replication that turn out to be more apparent than real.
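
    The three recognition measures compared in that analysis (A', d', and corrected recognition) can all be computed from hit and false-alarm rates. A worked sketch with made-up rates follows; the A' formula is the common Pollack and Norman approximation, not code from the study:

      # Illustrative computation of A', d', and corrected recognition (hits minus false alarms).
      from scipy.stats import norm

      def recognition_measures(h, fa):
          d_prime = norm.ppf(h) - norm.ppf(fa)
          corrected = h - fa
          if h >= fa:   # standard A' approximation when hits exceed false alarms
              a_prime = 0.5 + ((h - fa) * (1 + h - fa)) / (4 * h * (1 - fa))
          else:         # mirror-image form for below-chance performance
              a_prime = 0.5 - ((fa - h) * (1 + fa - h)) / (4 * fa * (1 - h))
          return {"A'": a_prime, "d'": d_prime, "corrected": corrected}

      # Hypothetical feature vs. binding conditions:
      print(recognition_measures(h=0.85, fa=0.20))
      print(recognition_measures(h=0.75, fa=0.25))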

  17. Resilient memory for melodies: The number of intervening melodies does not influence novel melody recognition.

    PubMed

    Herff, Steffen A; Olsen, Kirk N; Dean, Roger T

    2018-05-01

    In many memory domains, a decrease in recognition performance between the first and second presentation of an object is observed as the number of intervening items increases. However, this effect is not universal. Within the auditory domain, this form of interference has been demonstrated in word and single-note recognition, but has yet to be substantiated using relatively complex musical material such as a melody. Indeed, it is becoming clear that music shows intriguing properties when it comes to memory. This study investigated how the number of intervening items influences memory for melodies. In Experiments 1, 2 and 3, one melody was presented per trial in a continuous recognition paradigm. After each melody, participants indicated whether they had heard the melody in the experiment before by responding "old" or "new." In Experiment 4, participants rated perceived familiarity for every melody without being told that melodies reoccur. In four experiments using two corpora of music, two different memory tasks, transposed and untransposed melodies and up to 195 intervening melodies, no sign of a disruptive effect from the number of intervening melodies beyond the first was observed. We propose a new "regenerative multiple representations" conjecture to explain why intervening items increase interference in recognition memory for most domains but not music. This conjecture makes several testable predictions and has the potential to strengthen our understanding of domain specificity in human memory, while moving one step closer to explaining the "paradox" that is memory for melody.

  18. Remember-Know and Source Memory Instructions Can Qualitatively Change Old-New Recognition Accuracy: The Modality-Match Effect in Recognition Memory

    ERIC Educational Resources Information Center

    Mulligan, Neil W.; Besken, Miri; Peterson, Daniel

    2010-01-01

    Remember-Know (RK) and source memory tasks were designed to elucidate processes underlying memory retrieval. As part of more complex judgments, both tests produce a measure of old-new recognition, which is typically treated as equivalent to that derived from a standard recognition task. The present study demonstrates, however, that recognition…

  19. The Role of Active Exploration of 3D Face Stimuli on Recognition Memory of Facial Information

    ERIC Educational Resources Information Center

    Liu, Chang Hong; Ward, James; Markall, Helena

    2007-01-01

    Research on face recognition has mainly relied on methods in which observers are relatively passive viewers of face stimuli. This study investigated whether active exploration of three-dimensional (3D) face stimuli could facilitate recognition memory. A standard recognition task and a sequential matching task were employed in a yoked design.…

  20. Recognition memory: a review of the critical findings and an integrated theory for relating them.

    PubMed

    Malmberg, Kenneth J

    2008-12-01

    The development of formal models has aided theoretical progress in recognition memory research. Here, I review the findings that are critical for testing them, including behavioral and brain imaging results of single-item recognition, plurality discrimination, and associative recognition experiments under a variety of testing conditions. I also review the major approaches to measurement and process modeling of recognition. The review indicates that several extant dual-process measures of recollection are unreliable, and thus they are unsuitable as a basis for forming strong conclusions. At the process level, however, the retrieval dynamics of recognition memory and the effect of strengthening operations suggest that a recall-to-reject process plays an important role in plurality discrimination and associative recognition, but not necessarily in single-item recognition. A new theoretical framework proposes that the contribution of recollection to recognition depends on whether the retrieval of episodic details improves accuracy, and it organizes the models around the construct of efficiency. Accordingly, subjects adopt strategies that they believe will produce a desired level of accuracy in the shortest amount of time. Several models derived from this framework are shown to account for the accuracy, latency, and confidence with which the various recognition tasks are performed.

  1. Feature extraction for face recognition via Active Shape Model (ASM) and Active Appearance Model (AAM)

    NASA Astrophysics Data System (ADS)

    Iqtait, M.; Mohamad, F. S.; Mamat, M.

    2018-03-01

    Biometrics refers to pattern recognition systems used for the automatic recognition of persons based on an individual's characteristics and features. Face recognition with a high recognition rate is still a challenging task and is usually accomplished in three phases consisting of face detection, feature extraction, and expression classification. Precise and robust localization of trait points is a complicated and difficult issue in face recognition. Cootes proposed the multi-resolution Active Shape Model (ASM) algorithm, which can extract a specified shape accurately and efficiently. Furthermore, as an improvement on ASM, the Active Appearance Model (AAM) algorithm was proposed to extract both the shape and the texture of a specified object simultaneously. In this paper we give more details about the two algorithms and report the results of experiments testing their performance on one dataset of faces. We found that ASM is faster and achieves more accurate trait point localization than AAM, but AAM achieves a better match to the texture.
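
    For readers who want to see what extracted trait (landmark) points look like in practice, an off-the-shelf landmark detector can serve as a quick stand-in. The sketch below uses dlib's 68-point shape predictor; it is not an ASM or AAM fit, and the image path and pretrained model file are assumptions:

      # Hedged sketch: extract facial landmark coordinates with dlib's shape predictor.
      # This is a stand-in detector, not the ASM/AAM fitting evaluated in the paper.
      import dlib

      detector = dlib.get_frontal_face_detector()
      # Pretrained 68-point model (hypothetical local path; distributed separately by dlib).
      predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

      image = dlib.load_rgb_image("face.jpg")      # hypothetical input image
      faces = detector(image, 1)                   # upsample once to catch smaller faces
      for face in faces:
          shape = predictor(image, face)
          points = [(shape.part(i).x, shape.part(i).y) for i in range(shape.num_parts)]
          print(f"{len(points)} landmarks; first point at {points[0]}")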

  2. The influence of print exposure on the body-object interaction effect in visual word recognition.

    PubMed

    Hansen, Dana; Siakaluk, Paul D; Pexman, Penny M

    2012-01-01

    We examined the influence of print exposure on the body-object interaction (BOI) effect in visual word recognition. High print exposure readers and low print exposure readers either made semantic categorizations ("Is the word easily imageable?"; Experiment 1) or phonological lexical decisions ("Does the item sound like a real English word?"; Experiment 2). The results from Experiment 1 showed that there was a larger BOI effect for the low print exposure readers than for the high print exposure readers in semantic categorization, though an effect was observed for both print exposure groups. However, the results from Experiment 2 showed that the BOI effect was observed only for the high print exposure readers in phonological lexical decision. The results of the present study suggest that print exposure does influence the BOI effect, and that this influence varies as a function of task demands.

  3. The relational luring effect: Retrieval of relational information during associative recognition.

    PubMed

    Popov, Vencislav; Hristova, Penka; Anders, Royce

    2017-05-01

    Here we argue that semantic relations (e.g., works in: nurse-hospital) have abstract independent representations in long-term memory (LTM) and that the same representation is accessed by all exemplars of a specific relation. We present evidence from 2 associative recognition experiments that uncovered a novel relational luring effect (RLE) in recognition memory. Participants studied word pairs, and then discriminated between intact (old) pairs and recombined lures. In the first experiment participants responded more slowly to lures that were relationally similar (table-cloth) to studied pairs (floor-carpet), in contrast to relationally dissimilar lures (pipe-water). Experiment 2 extended the RLE by showing a continuous effect of relational lure strength on recognition times (RTs), false alarms, and hits. It used a continuous pair recognition task, where each recombined lure or target could be preceded by 0, 1, 2, 3 or 4 different exemplars of the same relation. RTs and false alarms increased linearly with the number of different previously seen relationally similar pairs. Moreover, more typical exemplars of a given relation lead to a stronger RLE. Finally, hits for intact pairs also rose with the number of previously studied different relational instances. These results suggest that semantic relations exist as independent representations in LTM and that during associative recognition these representations can be a spurious source of familiarity. We discuss the implications of the RLE for current models of semantic and episodic memory, unitization in associative recognition, analogical reasoning and retrieval, as well as constructive memory research. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Image processing strategies based on saliency segmentation for object recognition under simulated prosthetic vision.

    PubMed

    Li, Heng; Su, Xiaofan; Wang, Jing; Kan, Han; Han, Tingting; Zeng, Yajie; Chai, Xinyu

    2018-01-01

    Current retinal prostheses can only generate low-resolution visual percepts constituted of limited phosphenes which are elicited by an electrode array and have uncontrollable color and restricted grayscale. Under this kind of visual perception, prosthetic recipients can complete only simple visual tasks; more complex tasks like face identification/object recognition are extremely difficult. Therefore, it is necessary to investigate and apply image processing strategies for optimizing the visual perception of the recipients. This study focuses on recognition of the object of interest employing simulated prosthetic vision. We used a saliency segmentation method based on a biologically plausible graph-based visual saliency model and a grabCut-based self-adaptive iterative optimization framework to automatically extract foreground objects. Based on this, two image processing strategies, Addition of Separate Pixelization and Background Pixel Shrink, were further utilized to enhance the extracted foreground objects. i) Verification by psychophysical experiments showed that, under simulated prosthetic vision, both strategies had marked advantages over Direct Pixelization in terms of recognition accuracy and efficiency. ii) We also found that recognition performance under the two strategies was tied to the segmentation results and was positively affected by paired, interrelated objects in the scene. The use of the saliency segmentation method and image processing strategies can automatically extract and enhance foreground objects, and significantly improve object recognition performance for recipients implanted with a high-density implant. Copyright © 2017 Elsevier B.V. All rights reserved.
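
    The foreground-extraction step described above can be roughly approximated with OpenCV's built-in GrabCut, after which the background can be coarsely pixelized so the object of interest dominates the percept. The sketch below is illustrative only: the rectangle seed and the down-sampling factor are placeholders, and the paper's saliency-driven, self-adaptive iteration is not reproduced:

      # Hedged sketch: GrabCut foreground extraction plus coarse background pixelization.
      import cv2
      import numpy as np

      img = cv2.imread("scene.jpg")                      # hypothetical input image
      mask = np.zeros(img.shape[:2], np.uint8)
      bgd_model = np.zeros((1, 65), np.float64)
      fgd_model = np.zeros((1, 65), np.float64)
      rect = (20, 20, img.shape[1] - 40, img.shape[0] - 40)  # crude initial foreground box

      cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)
      fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype(np.uint8)

      # Pixelize the background by heavy down/up-sampling; keep the extracted foreground sharp.
      small = cv2.resize(img, None, fx=0.05, fy=0.05, interpolation=cv2.INTER_LINEAR)
      blocky = cv2.resize(small, (img.shape[1], img.shape[0]), interpolation=cv2.INTER_NEAREST)
      result = np.where(fg[:, :, None] == 1, img, blocky)
      cv2.imwrite("simulated_percept.png", result)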

  5. Task-dependent modulation of the visual sensory thalamus assists visual-speech recognition.

    PubMed

    Díaz, Begoña; Blank, Helen; von Kriegstein, Katharina

    2018-05-14

    The cerebral cortex modulates early sensory processing via feed-back connections to sensory pathway nuclei. The functions of this top-down modulation for human behavior are poorly understood. Here, we show that top-down modulation of the visual sensory thalamus (the lateral geniculate body, LGN) is involved in visual-speech recognition. In two independent functional magnetic resonance imaging (fMRI) studies, LGN response increased when participants processed fast-varying features of articulatory movements required for visual-speech recognition, as compared to temporally more stable features required for face identification with the same stimulus material. The LGN response during the visual-speech task correlated positively with the visual-speech recognition scores across participants. In addition, the task-dependent modulation was present for speech movements and did not occur for control conditions involving non-speech biological movements. In face-to-face communication, visual speech recognition is used to enhance or even enable understanding of what is said. Speech recognition is commonly explained in frameworks focusing on cerebral cortex areas. Our findings suggest that task-dependent modulation at subcortical sensory stages plays an important role in communication: together with similar findings in the auditory modality, they imply that task-dependent modulation of the sensory thalami is a general mechanism for optimizing speech recognition. Copyright © 2018. Published by Elsevier Inc.

  6. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787

  7. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.

  8. Own- and other-race face identity recognition in children: the effects of pose and feature composition.

    PubMed

    Anzures, Gizelle; Kelly, David J; Pascalis, Olivier; Quinn, Paul C; Slater, Alan M; de Viviés, Xavier; Lee, Kang

    2014-02-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image processing. The current study also confirms the presence of an ORE in children as young as 5 years of age using a recognition paradigm that is sensitive to their developing cognitive abilities. In addition, the present findings show that with age, increasing experience with familiar classes of own-race faces and further lack of experience with unfamiliar classes of other-race faces serves to maintain the ORE between 5 and 10 years of age rather than exacerbate the effect. All age groups also showed a differential effect of stimulus facial pose in their recognition of the internal regions of own- and other-race faces. Own-race inner faces were remembered best when three-quarter poses were used during familiarization and frontal poses were used during the recognition test. In contrast, other-race inner faces were remembered best when frontal poses were used during familiarization and three-quarter poses were used during the recognition test. Thus, children encode and/or retrieve own- and other-race faces from memory in qualitatively different ways.

  9. Own- and other-race face identity recognition in children: The effects of pose and feature composition

    PubMed Central

    Anzures, Gizelle; Kelly, David J.; Pascalis, Olivier; Quinn, Paul C.; Slater, Alan M.; de Viviés, Xavier; Lee, Kang

    2013-01-01

    We used a matching-to-sample task and manipulated facial pose and feature composition to examine the other-race effect (ORE) in face identity recognition between 5 and 10 years of age. Overall, the present findings provide a genuine measure of own- and other-race face identity recognition in children that is independent of photographic and image processing. The present study also confirms the presence of an ORE in children as young as 5 years of age using a recognition paradigm that is sensitive to their developing cognitive abilities. In addition, the present findings show that with age, increasing experience with familiar classes of own-race faces and further lack of experience with unfamiliar classes of other-race faces serves to maintain the ORE between 5 and 10 years of age rather than exacerbate the effect. All age groups also showed a differential effect of stimulus facial pose in their recognition of the internal regions of own- and other-race faces. Own-race inner faces were remembered best when three-quarter poses were used during familiarization and frontal poses were used during the recognition test. In contrast, other-race inner faces were remembered best when frontal poses were used during familiarization and three-quarter poses were used during the recognition test. Thus, children encode and/or retrieve own- and other-race faces from memory in qualitatively different ways. PMID:23731287

  10. Feedforward object-vision models only tolerate small image variations compared to human

    PubMed Central

    Ghodrati, Masoud; Farzmahdi, Amirhossein; Rajaei, Karim; Ebrahimpour, Reza; Khaligh-Razavi, Seyed-Mahdi

    2014-01-01

    Invariant object recognition is a remarkable ability of the primate visual system whose underlying mechanism has constantly been under intense investigation. Computational modeling is a valuable tool toward understanding the processes involved in invariant object recognition. Although recent computational models have shown outstanding performance on challenging image databases, they fail to perform well in image categorization under more complex image variations. Studies have shown that making sparse representations of objects by extracting more informative visual features through a feedforward sweep can lead to higher recognition performance. Here, however, we show that when the complexity of image variations is high, even this approach results in poor performance compared to humans. To assess the performance of models and humans in invariant object recognition tasks, we built a parametrically controlled image database consisting of several object categories varied in different dimensions and levels, rendered from 3D planes. Comparing the performance of several object recognition models with human observers shows that the models perform similarly to humans in categorization tasks only under low-level image variations. Furthermore, the results of our behavioral experiments demonstrate that, even under difficult experimental conditions (i.e., briefly presented masked stimuli with complex image variations), human observers performed outstandingly well, suggesting that the models are still far from resembling humans in invariant object recognition. Taken together, we suggest that learning sparse informative visual features, although desirable, is not a complete solution for future progress in object-vision modeling. We show that this approach is not of significant help in solving the computational crux of object recognition (i.e., invariant object recognition) when the identity-preserving image variations become more complex. PMID:25100986

  11. Psychometric Functions of Dual-Task Paradigms for Measuring Listening Effort.

    PubMed

    Wu, Yu-Hsiang; Stangl, Elizabeth; Zhang, Xuyang; Perkins, Joanna; Eilers, Emily

    The purpose of the study was to characterize the psychometric functions that describe task performance in dual-task listening effort measures as a function of signal to noise ratio (SNR). Younger adults with normal hearing (YNH, n = 24; experiment 1) and older adults with hearing impairment (n = 24; experiment 2) were recruited. Dual-task paradigms wherein the participants performed a primary speech recognition task simultaneously with a secondary task were conducted at a wide range of SNRs. Two different secondary tasks were used: an easy task (i.e., a simple visual reaction-time task) and a hard task (i.e., the incongruent Stroop test). The reaction time (RT) quantified the performance of the secondary task. For both participant groups and for both easy and hard secondary tasks, the curves that described the RT as a function of SNR were peak shaped. The RT increased as SNR changed from favorable to intermediate SNRs, and then decreased as SNRs moved from intermediate to unfavorable SNRs. The RT reached its peak (longest time) at the SNRs at which the participants could understand 30 to 50% of the speech. In experiments 1 and 2, the dual-task trials that had the same SNR were conducted in one block. To determine if the peak shape of the RT curves was specific to the blocked SNR presentation order used in these experiments, YNH participants were recruited (n = 25; experiment 3) and dual-task measures, wherein the SNR was varied from trial to trial (i.e., nonblocked), were conducted. The results indicated that, similar to the first two experiments, the RT curves had a peak shape. Secondary task performance was poorer at the intermediate SNRs than at the favorable and unfavorable SNRs. This pattern was observed for both YNH and older adults with hearing impairment participants and was not affected by either task type (easy or hard secondary task) or SNR presentation order (blocked or nonblocked). The shorter RT at the unfavorable SNRs (speech intelligibility < 30%) possibly reflects that the participants experienced cognitive overload and/or disengaged themselves from the listening task. The implication of using the dual-task paradigm as a listening effort measure is discussed.

  12. Temporal lobe structures and facial emotion recognition in schizophrenia patients and nonpsychotic relatives.

    PubMed

    Goghari, Vina M; Macdonald, Angus W; Sponheim, Scott R

    2011-11-01

    Temporal lobe abnormalities and emotion recognition deficits are prominent features of schizophrenia and appear related to the diathesis of the disorder. This study investigated whether temporal lobe structural abnormalities were associated with facial emotion recognition deficits in schizophrenia and related to genetic liability for the disorder. Twenty-seven schizophrenia patients, 23 biological family members, and 36 controls participated. Several temporal lobe regions (fusiform, superior temporal, middle temporal, amygdala, and hippocampus) previously associated with face recognition in normative samples and found to be abnormal in schizophrenia were evaluated using volumetric analyses. Participants completed a facial emotion recognition task and an age recognition control task under time-limited and self-paced conditions. Temporal lobe volumes were tested for associations with task performance. Group status explained 23% of the variance in temporal lobe volume. Left fusiform gray matter volume was decreased by 11% in patients and 7% in relatives compared with controls. Schizophrenia patients additionally exhibited smaller hippocampal and middle temporal volumes. Patients were unable to improve facial emotion recognition performance with unlimited time to make a judgment but were able to improve age recognition performance. Patients additionally showed a relationship between reduced temporal lobe gray matter and poor facial emotion recognition. For the middle temporal lobe region, the relationship between greater volume and better task performance was specific to facial emotion recognition and not age recognition. Because schizophrenia patients exhibited a specific deficit in emotion recognition not attributable to a generalized impairment in face perception, impaired emotion recognition may serve as a target for interventions.

  13. Under what conditions is recognition spared relative to recall after selective hippocampal damage in humans?

    PubMed

    Holdstock, J S; Mayes, A R; Roberts, N; Cezayirli, E; Isaac, C L; O'Reilly, R C; Norman, K A

    2002-01-01

    The claim that recognition memory is spared relative to recall after focal hippocampal damage has been disputed in the literature. We examined this claim by investigating object and object-location recall and recognition memory in a patient, YR, who has adult-onset selective hippocampal damage. Our aim was to identify the conditions under which recognition was spared relative to recall in this patient. She showed unimpaired forced-choice object recognition but clearly impaired recall, even when her control subjects found the object recognition task to be numerically harder than the object recall task. However, on two other recognition tests, YR's performance was not relatively spared. First, she was clearly impaired at an equivalently difficult yes/no object recognition task, but only when targets and foils were very similar. Second, YR was clearly impaired at forced-choice recognition of object-location associations. This impairment was also unrelated to difficulty because this task was no more difficult than the forced-choice object recognition task for control subjects. The clear impairment of yes/no, but not of forced-choice, object recognition after focal hippocampal damage, when targets and foils are very similar, is predicted by the neural network-based Complementary Learning Systems model of recognition. This model postulates that recognition is mediated by hippocampally dependent recollection and cortically dependent familiarity; thus hippocampal damage should not impair item familiarity. The model postulates that familiarity is ineffective when very similar targets and foils are shown one at a time and subjects have to identify which items are old (yes/no recognition). In contrast, familiarity is effective in discriminating which of similar targets and foils, seen together, is old (forced-choice recognition). Independent evidence from the remember/know procedure also indicates that YR's familiarity is normal. The Complementary Learning Systems model can also accommodate the clear impairment of forced-choice object-location recognition memory if it incorporates the view that the most complete convergence of spatial and object information, represented in different cortical regions, occurs in the hippocampus.

  14. Loneliness and the social monitoring system: Emotion recognition and eye gaze in a real-life conversation.

    PubMed

    Lodder, Gerine M A; Scholte, Ron H J; Goossens, Luc; Engels, Rutger C M E; Verhagen, Maaike

    2016-02-01

    Based on the belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's faces in a real-life conversation. Study 1 examined 170 college students (mean age = 19.26; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (mean age = 19.33; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all emotion recognition tasks, but that it was related to increased gaze towards their conversation partner's faces. Implications for the belongingness regulation system of lonely individuals are discussed. © 2015 The British Psychological Society.

  15. The relationship between change detection and recognition of centrally attended objects in motion pictures.

    PubMed

    Angelone, Bonnie L; Levin, Daniel T; Simons, Daniel J

    2003-01-01

    Observers typically detect changes to central objects more readily than changes to marginal objects, but they sometimes miss changes to central, attended objects as well. However, even if observers do not report such changes, they may be able to recognize the changed object. In three experiments we explored change detection and recognition memory for several types of changes to central objects in motion pictures. Observers who failed to detect a change still performed at above chance levels on a recognition task in almost all conditions. In addition, observers who detected the change were no more accurate in their recognition than those who did not detect the change. Despite large differences in the detectability of changes across conditions, those observers who missed the change did not vary in their ability to recognize the changing object.

  16. Autobiographically significant concepts: more episodic than semantic in nature? An electrophysiological investigation of overlapping types of memory.

    PubMed

    Renoult, Louis; Davidson, Patrick S R; Schmitz, Erika; Park, Lillian; Campbell, Kenneth; Moscovitch, Morris; Levine, Brian

    2015-01-01

    A common assertion is that semantic memory emerges from episodic memory, shedding the distinctive contexts associated with episodes over time and/or repeated instances. Some semantic concepts, however, may retain their episodic origins or acquire episodic information during life experiences. The current study examined this hypothesis by investigating the ERP correlates of autobiographically significant (AS) concepts, that is, semantic concepts that are associated with vivid episodic memories. We inferred the contribution of semantic and episodic memory to AS concepts using the amplitudes of the N400 and late positive component, respectively. We compared famous names that easily brought to mind episodic memories (high AS names) against equally famous names that did not bring such recollections to mind (low AS names) on a semantic task (fame judgment) and an episodic task (recognition memory). Compared with low AS names, high AS names were associated with increased amplitude of the late positive component in both tasks. Moreover, in the recognition task, this effect of AS was highly correlated with recognition confidence. In contrast, the N400 component did not differentiate the high versus low AS names but, instead, was related to the amount of general knowledge participants had regarding each name. These results suggest that semantic concepts high in AS, such as famous names, have an episodic component and are associated with similar brain processes to those that are engaged by episodic memory. Studying AS concepts may provide unique insights into how episodic and semantic memory interact.

  17. Role of PFC during retrieval of recognition memory in rodents.

    PubMed

    Bekinschtein, Pedro; Weisstaub, Noelia

    2014-01-01

    One of the challenges for memory researchers is the study of the neurobiology of episodic memory, which is defined by the integration of all the different components of experiences that support the conscious recollection of events. The features of episodic memory include a particular object or person ("what"), the context in which the experience took place ("where") and the particular time at which the event occurred ("when"). Although episodic memory has been mainly studied in humans, there are many studies that demonstrate these features in non-human animals. Here, we summarize a set of studies that employ different versions of recognition memory tasks in animals to study the role of the medial prefrontal cortex in episodic memory. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. Long-term priming of neighbours biases the word recognition process: evidence from a lexical decision task.

    PubMed

    Wagenmakers, Eric-Jan; Raaijmakers, Jeroen G W

    2006-12-01

    The role of orthographically similar words (i.e., neighbours) in the word recognition process has been studied extensively using short-term priming paradigms (e.g., Colombo, 1986). Here we demonstrate that long-term effects of neighbour priming can also be obtained. Experiment 1 showed that prior study of a neighbour (e.g., TANGO) increased later lexical decision performance for similar words (e.g., MANGO), but decreased performance for similar pseudowords (e.g., LANGO). Experiment 2 replicated this bias effect and showed that the increase in lexical decision performance due to neighbour priming is selectively due to words from a relatively sparse neighbourhood. Explanations of the bias effect in terms of lexical activation and episodic memory retrieval are discussed.

  19. Learning, specialization, efficiency and task allocation in social insects

    PubMed Central

    Muller, Helene

    2009-01-01

    One of the most spectacular features of social insect colonies is their division of labor. Although individuals are often totipotent in terms of the labor they might perform, they might persistently work as scouts, fighters, nurses, foragers, undertakers or cleaners with a repetitiveness that might resemble an assembly line worker in a factory. Perhaps because of this apparent analogy, researchers have often assumed a priori that such labor division must be efficient, but empirical proof is scarce. New work on Temnothorax ants shows that there might be no link between an individual's propensity to perform a task and their efficiency at that task, nor are task specialists more efficient than generalists. Here we argue that learning psychology might provide the missing link between social insect task specialization and efficiency: just like in human societies, efficiency at a job specialty is only partially a result of “talent”, or innate tendency to engage in a job: it is much more a result of perfecting skills with experience, and the extent to which experience can be carried over from one task to the next (transfer), or whether experience at one task might actually impair performance at another (interference). Indeed there is extensive circumstantial evidence that learning is involved in almost any task performed by social insect workers, including food type recognition and handling techniques, but also such seemingly basic tasks as nest building and climate control. New findings on Cerapachys ants indicate that early experience of success at a task might to some extent determine the “profession” an insect worker chooses in later life.

  20. Clustered Multi-Task Learning for Automatic Radar Target Recognition

    PubMed Central

    Li, Cong; Bao, Weimin; Xu, Luping; Zhang, Hua

    2017-01-01

    Model training is a key technique for radar target recognition. Traditional model training algorithms in the framework of single-task learning ignore the relationships among multiple tasks, which degrades the recognition performance. In this paper, we propose a clustered multi-task learning method, which can reveal and share the multi-task relationships for radar target recognition. To further make full use of these relationships, the latent multi-task relationships in the projection space are taken into consideration. Specifically, a constraint term in the projection space is proposed, the main idea of which is that multiple tasks within a close cluster should be close to each other in the projection space. In the proposed method, the cluster structures and multi-task relationships can be autonomously learned and utilized in both the original and the projected spaces. In view of the nonlinear characteristics of radar targets, the proposed method is extended to a non-linear kernel version and the corresponding non-linear multi-task solving method is proposed. Comprehensive experimental studies on a simulated high-resolution range profile dataset and the MSTAR SAR public database verify the superiority of the proposed method to some related algorithms. PMID:28953267
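
    The core idea is a regularizer that keeps the weight vectors of tasks in the same cluster close together. The sketch below is only a generic illustration of that idea, not the paper's algorithm: per-task linear models trained by gradient descent with a ridge term plus a pull toward each task's cluster mean. The cluster assignments, penalty weights, learning rate, and toy data are all assumptions.

      # Generic clustered multi-task regularization sketch (not the paper's method):
      # tasks in the same cluster are pulled toward their cluster's mean weights.
      import numpy as np

      def clustered_mtl(Xs, ys, clusters, lam=0.1, mu=1.0, lr=1e-2, n_iter=2000):
          """Xs, ys: per-task design matrices and targets; clusters: cluster id per task."""
          n_tasks, d = len(Xs), Xs[0].shape[1]
          W = np.zeros((n_tasks, d))
          for _ in range(n_iter):
              # cluster means recomputed from the current task weights
              means = {c: W[[i for i, ci in enumerate(clusters) if ci == c]].mean(axis=0)
                       for c in set(clusters)}
              for t in range(n_tasks):
                  grad = Xs[t].T @ (Xs[t] @ W[t] - ys[t]) / len(ys[t])  # task loss gradient
                  grad += lam * W[t]                                    # ridge penalty
                  grad += mu * (W[t] - means[clusters[t]])              # pull toward cluster mean
                  W[t] -= lr * grad
          return W

      # toy usage: four tasks in two clusters, random data
      rng = np.random.default_rng(0)
      Xs = [rng.normal(size=(50, 5)) for _ in range(4)]
      ys = [x @ rng.normal(size=5) for x in Xs]
      W = clustered_mtl(Xs, ys, clusters=[0, 0, 1, 1])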

  1. Perceptual Effects of Social Salience: Evidence from Self-Prioritization Effects on Perceptual Matching

    ERIC Educational Resources Information Center

    Sui, Jie; He, Xun; Humphreys, Glyn W.

    2012-01-01

    We present novel evidence showing that new self-relevant visual associations can affect performance in simple shape recognition tasks. Participants associated labels for themselves, other people, or neutral terms with geometric shapes and then immediately judged whether subsequent label-shape pairings were matched. Across 4 experiments there was a…

  2. Modality independence of order coding in working memory: Evidence from cross-modal order interference at recall.

    PubMed

    Vandierendonck, André

    2016-01-01

    Working memory researchers do not agree on whether order in serial recall is encoded by dedicated modality-specific systems or by a more general modality-independent system. Although previous research supports the existence of autonomous modality-specific systems, it has been shown that serial recognition memory is prone to cross-modal order interference by concurrent tasks. The present study used a serial recall task, which was performed in a single-task condition and in a dual-task condition with an embedded memory task in the retention interval. The modality of the serial task was either verbal or visuospatial, and the embedded tasks were in the other modality and required either serial or item recall. Care was taken to avoid modality overlaps during presentation and recall. In Experiment 1, visuospatial but not verbal serial recall was more impaired when the embedded task was an order than when it was an item task. Using a more difficult verbal serial recall task, verbal serial recall was also more impaired by another order recall task in Experiment 2. These findings are consistent with the hypothesis of modality-independent order coding. The implications for views on short-term recall and the multicomponent view of working memory are discussed.

  3. Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition.

    PubMed

    Chatterjee, Monita; Peng, Shu-Chen

    2008-01-01

    Fundamental frequency (F0) processing by cochlear implant (CI) listeners was measured using a psychophysical task and a speech intonation recognition task. Listeners' Weber fractions for modulation frequency discrimination were measured using an adaptive, 3-interval, forced-choice paradigm: stimuli were presented through a custom research interface. In the speech intonation recognition task, listeners were asked to indicate whether resynthesized bisyllabic words, when presented in the free field through the listeners' everyday speech processor, were question-like or statement-like. The resynthesized tokens were systematically manipulated to have different initial-F0s to represent male vs. female voices, and different F0 contours (i.e., falling, flat, and rising). Although the CI listeners showed considerable variation in performance on both tasks, significant correlations were observed between the CI listeners' sensitivity to modulation frequency in the psychophysical task and their performance in intonation recognition. Consistent with their greater reliance on temporal cues, the CI listeners' performance in the intonation recognition task was significantly poorer with the higher initial-F0 stimuli than with the lower initial-F0 stimuli. Similar results were obtained with normal hearing listeners attending to noiseband-vocoded CI simulations with reduced spectral resolution.
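
    The psychophysical measure is summarized only briefly in the record, so the following is a hedged, generic sketch of how a Weber fraction for modulation frequency discrimination could be estimated from an adaptive track. The 2-down/1-up rule, step size, simulated listener, and all numbers are assumptions; this is not the study's actual adaptive 3-interval procedure.

      # Generic adaptive-track sketch: estimate a modulation-frequency difference
      # limen and convert it to a Weber fraction (delta_f / f_ref). All values assumed.
      import numpy as np

      rng = np.random.default_rng(1)
      f_ref = 100.0            # reference modulation frequency, Hz (assumed)
      delta = 40.0             # current frequency difference, Hz
      step = 0.8               # multiplicative step size (assumed)
      true_jnd = 12.0          # simulated listener's limen (assumed)

      reversals, run, last_dir = [], 0, None
      while len(reversals) < 8:
          # toy 3-alternative psychometric function for the simulated listener
          p_correct = 1 / 3 + (2 / 3) / (1 + np.exp(-(delta - true_jnd) / 3.0))
          run = run + 1 if rng.random() < p_correct else 0
          if run == 2:                 # two correct in a row: make the task harder
              direction, run = "down", 0
              delta *= step
          elif run == 0:               # incorrect: make the task easier
              direction = "up"
              delta /= step
          else:
              continue
          if last_dir is not None and direction != last_dir:
              reversals.append(delta)
          last_dir = direction

      jnd = float(np.mean(reversals[-6:]))
      print(f"Estimated limen ~ {jnd:.1f} Hz; Weber fraction ~ {jnd / f_ref:.2f}")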

  4. Processing F0 with Cochlear Implants: Modulation Frequency Discrimination and Speech Intonation Recognition

    PubMed Central

    Chatterjee, Monita; Peng, Shu-Chen

    2008-01-01

    Fundamental frequency (F0) processing by cochlear implant (CI) listeners was measured using a psychophysical task and a speech intonation recognition task. Listeners’ Weber fractions for modulation frequency discrimination were measured using an adaptive, 3-interval, forced-choice paradigm: stimuli were presented through a custom research interface. In the speech intonation recognition task, listeners were asked to indicate whether resynthesized bisyllabic words, when presented in the free field through the listeners’ everyday speech processor, were question-like or statement-like. The resynthesized tokens were systematically manipulated to have different initial F0s to represent male vs. female voices, and different F0 contours (i.e., falling, flat, and rising). Although the CI listeners showed considerable variation in performance on both tasks, significant correlations were observed between the CI listeners’ sensitivity to modulation frequency in the psychophysical task and their performance in intonation recognition. Consistent with their greater reliance on temporal cues, the CI listeners’ performance in the intonation recognition task was significantly poorer with the higher initial-F0 stimuli than with the lower initial-F0 stimuli. Similar results were obtained with normal hearing listeners attending to noiseband-vocoded CI simulations with reduced spectral resolution. PMID:18093766

  5. Illumination robust face recognition using spatial adaptive shadow compensation based on face intensity prior

    NASA Astrophysics Data System (ADS)

    Hsieh, Cheng-Ta; Huang, Kae-Horng; Lee, Chang-Hsing; Han, Chin-Chuan; Fan, Kuo-Chin

    2017-12-01

    Robust face recognition under illumination variations is an important and challenging task in a face recognition system, particularly for face recognition in the wild. In this paper, a face image preprocessing approach, called spatial adaptive shadow compensation (SASC), is proposed to eliminate shadows in the face image due to different lighting directions. First, spatial adaptive histogram equalization (SAHE), which uses a face intensity prior model, is proposed to enhance the contrast of each local face region without generating visible noise in smooth face areas. Adaptive shadow compensation (ASC), which performs shadow compensation in each local image block, is then used to produce a well-compensated face image appropriate for face feature extraction and recognition. Finally, null-space linear discriminant analysis (NLDA) is employed to extract discriminant features from SASC-compensated images. Experiments performed on the Yale B, Yale B extended, and CMU PIE face databases have shown that the proposed SASC always yields the best face recognition accuracy. That is, SASC is more robust to illumination variations in face recognition than other shadow compensation approaches.
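
    As a very rough illustration of locally adaptive illumination normalization applied before feature extraction (this is not the paper's SASC/SAHE/ASC method, which relies on a face intensity prior; OpenCV's CLAHE is used here only as a readily available stand-in, and the file name is hypothetical):

      # Block-wise illumination normalization as a preprocessing step before face
      # feature extraction. NOT the paper's SASC pipeline; CLAHE is a stand-in.
      import cv2

      def local_illumination_normalize(gray_face, grid=(8, 8), clip=2.0):
          """Apply locally adaptive histogram equalization over image tiles."""
          clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=grid)
          return clahe.apply(gray_face)

      # Example: normalize a grayscale face crop, then pass it to any feature extractor.
      face = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file
      if face is not None:
          compensated = local_illumination_normalize(face)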

  6. Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions.

    PubMed

    Eckert, Mark A; Teubner-Rhodes, Susan; Vaden, Kenneth I

    2016-01-01

    This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.

  7. Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions

    PubMed Central

    Eckert, Mark A.; Teubner-Rhodes, Susan; Vaden, Kenneth I.

    2016-01-01

    This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. We propose that the behavioral economics and/or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance. PMID:27355759

  8. Clinical experience with the words-in-noise test on 3430 veterans: comparisons with pure-tone thresholds and word recognition in quiet.

    PubMed

    Wilson, Richard H

    2011-01-01

    Since the 1940s, measures of pure-tone sensitivity and speech recognition in quiet have been vital components of the audiologic evaluation. Although early investigators urged that speech recognition in noise also should be a component of the audiologic evaluation, only recently has this suggestion started to become a reality. This report focuses on the Words-in-Noise (WIN) Test, which evaluates word recognition in multitalker babble at seven signal-to-noise ratios and uses the 50% correct point (in dB SNR) calculated with the Spearman-Kärber equation as the primary metric. The WIN was developed and validated in a series of 12 laboratory studies. The current study examined the effectiveness of the WIN materials for measuring the word-recognition performance of patients in a typical clinical setting. To examine the relations among three audiometric measures including pure-tone thresholds, word-recognition performances in quiet, and word-recognition performances in multitalker babble for veterans seeking remediation for their hearing loss. Retrospective, descriptive. The participants were 3430 veterans who for the most part were evaluated consecutively in the Audiology Clinic at the VA Medical Center, Mountain Home, Tennessee. The mean age was 62.3 yr (SD = 12.8 yr). The data were collected in the course of a 60 min routine audiologic evaluation. A history, otoscopy, and aural-acoustic immittance measures also were included in the clinic protocol but were not evaluated in this report. Overall, the 1000-8000 Hz thresholds were significantly lower (better) in the right ear (RE) than in the left ear (LE). There was a direct relation between age and the pure-tone thresholds, with greater change across age in the high frequencies than in the low frequencies. Notched audiograms at 4000 Hz were observed in at least one ear in 41% of the participants with more unilateral than bilateral notches. Normal pure-tone thresholds (≤20 dB HL) were obtained from 6% of the participants. Maximum performance on the Northwestern University Auditory Test No. 6 (NU-6) in quiet was ≥90% correct by 50% of the participants, with an additional 20% performing at ≥80% correct; the RE performed 1-3% better than the LE. Of the 3291 who completed the WIN on both ears, only 7% exhibited normal performance (50% correct point of ≤6 dB SNR). Overall, WIN performance was significantly better in the RE (mean = 13.3 dB SNR) than in the LE (mean = 13.8 dB SNR). Recognition performance on both the NU-6 and the WIN decreased as a function of both pure-tone hearing loss and age. There was a stronger relation between the high-frequency pure-tone average (1000, 2000, and 4000 Hz) and the WIN than between the pure-tone average (500, 1000, and 2000 Hz) and the WIN. The results on the WIN from both the previous laboratory studies and the current clinical study indicate that the WIN is an appropriate clinic instrument to assess word-recognition performance in background noise. Recognition performance on a speech-in-quiet task does not predict performance on a speech-in-noise task, as the two tasks reflect different domains of auditory function. Experience with the WIN indicates that word-in-noise tasks should be considered the "stress test" for auditory function. American Academy of Audiology.
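
    The record names the Spearman-Kärber equation as the rule for scoring the 50% correct point (in dB SNR) from performance at the seven SNRs. Below is a hedged sketch of that computation; the padding of the endpoints to 0 and 1, the 24-to-0 dB SNR range, and the example scores are assumptions, and the clinical WIN scoring convention may differ in detail.

      # Hedged sketch of a Spearman-Kärber style estimate of the 50% correct point.
      import numpy as np

      def spearman_karber_50(snrs_db, prop_correct):
          """snrs_db: SNRs in descending order; prop_correct: proportion correct at each."""
          x = np.asarray(snrs_db, dtype=float)[::-1]        # ascending SNR
          p = np.asarray(prop_correct, dtype=float)[::-1]
          step = x[1] - x[0]
          # pad so the psychometric function spans 0 -> 1 outside the sampled range
          x = np.concatenate(([x[0] - step], x, [x[-1] + step]))
          p = np.concatenate(([0.0], np.clip(p, 0, 1), [1.0]))
          midpoints = (x[:-1] + x[1:]) / 2
          return float(np.sum(np.diff(p) * midpoints))

      # Seven WIN-style SNRs (assumed 24 to 0 dB in 4-dB steps) with hypothetical scores
      print(spearman_karber_50([24, 20, 16, 12, 8, 4, 0],
                               [1.0, 0.9, 0.8, 0.6, 0.4, 0.2, 0.0]))   # ~10.4 dB SNR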

  9. Saccadic movement deficiencies in adults with ADHD tendencies.

    PubMed

    Lee, Yun-Jeong; Lee, Sangil; Chang, Munseon; Kwak, Ho-Wan

    2015-12-01

    The goal of the present study was to explore deficits in gaze detection and emotional value judgment during a saccadic eye movement task in adults with attention deficit/hyperactivity disorder (ADHD) tendencies. Thirty-two participants, consisting of 16 adults with ADHD tendencies and 16 controls, were recruited from a pool of 243 university students. Among the many problems in adults with ADHD, our research focused on the deficits in the processing of nonverbal cues, such as gaze direction and the emotional value of others' faces. In Experiment 1, a cue display containing a face with emotional value and gaze direction was followed by a target display containing two faces located on the left and right side of the display. The participant's task was to make an anti-saccade opposite to the gaze direction if the cue face was not emotionally neutral. Participants with ADHD tendencies made more overall errors than controls in making anti-saccades. Based on the hypothesis that the exposure duration of the cue display in Experiment 1 may have been too long, we presented the cue and target display simultaneously to prevent participants from preparing saccades in advance. Participants in Experiment 2 were asked to make either a pro-saccade or an anti-saccade depending on the emotional value of the central cue face. Interestingly, significant group differences were observed for errors of omission and commission. In addition, a significant three-way interaction among groups, cue emotion, and target gaze direction suggests that the emotional recognition and gaze control systems might somehow be interconnected. The results also show that participants with ADHD tendencies are more easily distracted by a task-irrelevant gaze direction. Taken together, these results suggest that tasks requiring both response inhibition (anti-saccade) and gaze-emotion recognition might be useful in developing a diagnostic test for discriminating adults with ADHD tendencies from healthy adults.

  10. A further examination of word frequency and age-of-acquisition effects in English lexical decision task performance: The role of frequency trajectory.

    PubMed

    Juhasz, Barbara J; Yap, Melvin J; Raoul, Akila; Kaye, Micaela

    2018-04-23

    Word frequency is an important predictor of lexical-decision task performance. The current study further examined the role of this variable by exploring the influence of frequency trajectory. Frequency trajectory is measured by how often a word occurs in childhood relative to adulthood. Past research on the role of this variable in word recognition has produced equivocal results. In the current study, words were selected based on their frequencies in Grade 1 (child frequency) and Grade 13 (college frequency). In Experiment 1, four frequency trajectory conditions were factorially examined in a lexical-decision task with English words: high-to-high (world), high-to-low (uncle), low-to-high (brain) and low-to-low (opera). An interaction between Grade 1 and college frequency demonstrated that words in the low-to-high condition were processed significantly faster and more accurately than words in the low-to-low condition, whereas the high-to-high and high-to-low conditions did not differ significantly. In Experiment 2, an advantage for words with an increasing frequency trajectory was also supported in regression analyses on both lexical decision and naming times for 3,039 items selected from the English Lexicon Project (Balota et al., 2007). This was replicated in Experiment 3, based on a regression analysis of 2,680 words from the British Lexicon Project (BLP; Keuleers, Lacey, Rastle, & Brysbaert, 2012). In all analyses, rated age-of-acquisition also significantly impacted word recognition. Together, the results suggest that the age at which a word is initially learned as well as its frequency trajectory across childhood impact performance in the lexical-decision task. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
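
    The regression analyses described here (RT regressed on childhood frequency, adult frequency, their interaction, and rated age-of-acquisition) can be sketched at the item level as follows. The column names and CSV file are hypothetical, and this is not the authors' analysis code.

      # Minimal item-level regression sketch in the spirit of the analyses described.
      import pandas as pd
      import statsmodels.formula.api as smf

      items = pd.read_csv("lexicon_items.csv")  # hypothetical item-level file
      model = smf.ols("rt ~ log_freq_grade1 * log_freq_college + aoa_rating",
                      data=items).fit()
      print(model.summary())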

  11. Theory of mind and its relationship with executive functions and emotion recognition in borderline personality disorder.

    PubMed

    Baez, Sandra; Marengo, Juan; Perez, Ana; Huepe, David; Font, Fernanda Giralt; Rial, Veronica; Gonzalez-Gadea, María Luz; Manes, Facundo; Ibanez, Agustin

    2015-09-01

    Impaired social cognition has been claimed to be a mechanism underlying the development and maintenance of borderline personality disorder (BPD). One important aspect of social cognition is the theory of mind (ToM), a complex skill that seems to be influenced by more basic processes, such as executive functions (EF) and emotion recognition. Previous ToM studies in BPD have yielded inconsistent results. This study assessed the performance of BPD adults on ToM, emotion recognition, and EF tasks. We also examined whether EF and emotion recognition could predict the performance on ToM tasks. We evaluated 15 adults with BPD and 15 matched healthy controls using different tasks of EF, emotion recognition, and ToM. The results showed that BPD adults exhibited deficits in the three domains, which seem to be task-dependent. Furthermore, we found that EF and emotion recognition predicted the performance on ToM. Our results suggest that tasks that involve real-life social scenarios and contextual cues are more sensitive to detect ToM and emotion recognition deficits in BPD individuals. Our findings also indicate that (a) ToM variability in BPD is partially explained by individual differences on EF and emotion recognition; and (b) ToM deficits of BPD patients are partially explained by the capacity to integrate cues from face, prosody, gesture, and social context to identify the emotions and others' beliefs. © 2014 The British Psychological Society.

  12. Is visual image segmentation a bottom-up or an interactive process?

    PubMed

    Vecera, S P; Farah, M J

    1997-11-01

    Visual image segmentation is the process by which the visual system groups features that are part of a single shape. Is image segmentation a bottom-up or an interactive process? In Experiments 1 and 2, we presented subjects with two overlapping shapes and asked them to determine whether two probed locations were on the same shape or on different shapes. The availability of top-down support was manipulated by presenting either upright or rotated letters. Subjects were fastest to respond when the shapes corresponded to familiar shapes--the upright letters. In Experiment 3, we used a variant of this segmentation task to rule out the possibility that subjects performed same/different judgments after segmentation and recognition of both letters. Finally, in Experiment 4, we ruled out the possibility that the advantage for upright letters was merely due to faster recognition of upright letters relative to rotated letters. The results suggested that the previous effects were not due to faster recognition of upright letters; stimulus familiarity influenced segmentation per se. The results are discussed in terms of an interactive model of visual image segmentation.

  13. The influence of talker and foreign-accent variability on spoken word identification.

    PubMed

    Bent, Tessa; Holt, Rachael Frush

    2013-03-01

    In spoken word identification and memory tasks, stimulus variability from numerous sources impairs performance. In the current study, the influence of foreign-accent variability on spoken word identification was evaluated in two experiments. Experiment 1 used a between-subjects design to test word identification in noise in single-talker and two multiple-talker conditions: multiple talkers with the same accent and multiple talkers with different accents. Identification performance was highest in the single-talker condition, but there was no difference between the single-accent and multiple-accent conditions. Experiment 2 further explored word recognition for multiple talkers in single-accent versus multiple-accent conditions using a mixed design. A detriment to word recognition was observed in the multiple-accent condition compared to the single-accent condition, but the effect differed across the language backgrounds tested. These results demonstrate that the processing of foreign-accent variation may influence word recognition in ways similar to other sources of variability (e.g., speaking rate or style) in that the inclusion of multiple foreign accents can result in a small but significant performance decrement beyond the multiple-talker effect.

  14. Neural substrates of view-invariant object recognition developed without experiencing rotations of the objects.

    PubMed

    Okamura, Jun-Ya; Yamaguchi, Reona; Honda, Kazunari; Wang, Gang; Tanaka, Keiji

    2014-11-05

    One fails to recognize an unfamiliar object across changes in viewing angle when it must be discriminated from similar distractor objects. View-invariant recognition gradually develops as the viewer repeatedly sees the objects in rotation. It is assumed that different views of each object are associated with one another while their successive appearance is experienced in rotation. However, natural experience of objects also contains ample opportunities to discriminate among objects at each of the multiple viewing angles. Our previous behavioral experiments showed that after experiencing a new set of object stimuli during a task that required only discrimination at each of four viewing angles at 30° intervals, monkeys could recognize the objects across changes in viewing angle up to 60°. By recording activities of neurons from the inferotemporal cortex after various types of preparatory experience, we here found a possible neural substrate for the monkeys' performance. For object sets that the monkeys had experienced during the task that required only discrimination at each of four viewing angles, many inferotemporal neurons showed object selectivity covering multiple views. The degree of view generalization found for these object sets was similar to that found for stimulus sets with which the monkeys had been trained to conduct view-invariant recognition. These results suggest that the experience of discriminating new objects in each of several viewing angles develops the partially view-generalized object selectivity distributed over many neurons in the inferotemporal cortex, which in turn bases the monkeys' emergent capability to discriminate the objects across changes in viewing angle. Copyright © 2014 the authors 0270-6474/14/3415047-13$15.00/0.

  15. Is attention essential for inducing synesthetic colors? Evidence from oculomotor distractors.

    PubMed

    Nijboer, Tanja C W; Van der Stigchel, Stefan

    2009-06-30

    In studies investigating visual attention in synesthesia, the targets usually induce a synesthetic color. To measure to what extent attention is necessary to induce synesthetic color experiences, one needs a task in which the synesthetic color is induced by a task-irrelevant distractor. In the current study, an oculomotor distractor task was used in which an eye movement was to be made to a physically colored target while ignoring a single physically colored or synesthetic distractor. Whereas many erroneous eye movements were made to distractors with an identical hue as the target (i.e., capture), much less interference was found with synesthetic distractors. The interference of synesthetic distractors was comparable with achromatic non-digit distractors. These results suggest that attention and hence overt recognition of the inducing stimulus are essential for the synesthetic color experience to occur.

  16. Concreteness norms for 1,659 French words: Relationships with other psycholinguistic variables and word recognition times.

    PubMed

    Bonin, Patrick; Méot, Alain; Bugaiska, Aurélia

    2018-02-12

    Words that correspond to a potential sensory experience-concrete words-have long been found to possess a processing advantage over abstract words in various lexical tasks. We collected norms of concreteness for a set of 1,659 French words, together with other psycholinguistic norms that were not available for these words-context availability, emotional valence, and arousal-but which are important if we are to achieve a better understanding of the meaning of concreteness effects. We then investigated the relationships of concreteness with these newly collected variables, together with other psycholinguistic variables that were already available for this set of words (e.g., imageability, age of acquisition, and sensory experience ratings). Finally, thanks to the variety of psychological norms available for this set of words, we decided to test further the embodied account of concreteness effects in visual-word recognition, championed by Kousta, Vigliocco, Vinson, Andrews, and Del Campo (Journal of Experimental Psychology: General, 140, 14-34, 2011). Similarly, we investigated the influences of concreteness in three word recognition tasks-lexical decision, progressive demasking, and word naming-using a multiple regression approach, based on the reaction times available in Chronolex (Ferrand, Brysbaert, Keuleers, New, Bonin, Méot, Pallier, Frontiers in Psychology, 2; 306, 2011). The norms can be downloaded as supplementary material provided with this article.

  17. Short theta burst stimulation to left frontal cortex prior to encoding enhances subsequent recognition memory

    PubMed Central

    Demeter, Elise; Mirdamadi, Jasmine L.; Meehan, Sean K.; Taylor, Stephan F.

    2016-01-01

    Deep semantic encoding of verbal stimuli can aid in later successful retrieval of those stimuli from long-term episodic memory. Evidence from numerous neuropsychological and neuroimaging experiments demonstrate regions in left prefrontal cortex, including left dorsolateral prefrontal cortex (DLPFC), are important for processes related to encoding. Here, we investigated the relationship between left DLPFC activity during encoding and successful subsequent memory with transcranial magnetic stimulation (TMS). In a pair of experiments using a 2-session within-subjects design, we stimulated either left DLPFC or a control region (Vertex) with a single 2-s train of short theta burst stimulation (sTBS) during a semantic encoding task and then gave participants a recognition memory test. We found that subsequent memory was enhanced on the day left DLPFC was stimulated, relative to the day Vertex was stimulated, and that DLPFC stimulation also increased participants’ confidence in their decisions during the recognition task. We also explored the time course of how long the effects of sTBS persisted. Our data suggest 2 s of sTBS to left DLPFC is capable of enhancing subsequent memory for items encoded up to 15 s following stimulation. Collectively, these data demonstrate sTBS is capable of enhancing long-term memory and provide evidence that TBS protocols are a potentially powerful tool for modulating cognitive function. PMID:27098772

  18. I undervalue you but I need you: the dissociation of attitude and memory toward in-group members.

    PubMed

    Zhao, Ke; Wu, Qi; Shen, Xunbing; Xuan, Yuming; Fu, Xiaolan

    2012-01-01

    In the present study, the in-group bias or in-group derogation among Mainland Chinese was investigated through a rating task and a recognition test. In two experiments, participants from two universities with similar ranks rated novel faces or names and then had a recognition test. Half of the faces or names were labeled as participants' own university and the other half were labeled as their counterpart. Results showed that, for either faces or names, rating scores for out-group members were consistently higher than those for in-group members, whereas the recognition accuracy showed just the opposite. These results indicated that the attitude and memory for group-relevant information might be dissociated among Mainland Chinese.

  19. I Undervalue You but I Need You: The Dissociation of Attitude and Memory Toward In-Group Members

    PubMed Central

    Zhao, Ke; Wu, Qi; Shen, Xunbing; Xuan, Yuming; Fu, Xiaolan

    2012-01-01

    In the present study, the in-group bias or in-group derogation among mainland Chinese was investigated through a rating task and a recognition test. In two experiments, participants from two universities with similar ranks rated novel faces or names and then had a recognition test. Half of the faces or names were labeled as participants' own university and the other half were labeled as their counterpart. Results showed that, for either faces or names, rating scores for out-group members were consistently higher than those for in-group members, whereas the recognition accuracy showed just the opposite. These results indicated that the attitude and memory for group-relevant information might be dissociated among Mainland Chinese. PMID:22412955

  20. Recognizing Biological Motion and Emotions from Point-Light Displays in Autism Spectrum Disorders

    PubMed Central

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P.; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

    Among the main characteristics of Autism Spectrum Disorder (ASD) are problems with social interaction and communication. Here, we explored ASD-related alterations in ‘reading’ body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of ‘biological motion’ and ‘emotions’ from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed-participants were more accurate than ASD-subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control-tasks (involving the indication of color-changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person’s ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point light displays. Eye movements were assessed during the completion of tasks and results indicated that ASD-participants generally produced more saccades and shorter fixation-durations compared to the control-group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task-performance. PMID:22970227

  1. The Role of Anterior Nuclei of the Thalamus: A Subcortical Gate in Memory Processing: An Intracerebral Recording Study.

    PubMed

    Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan

    2015-01-01

    The aim was to study the involvement of the anterior nuclei of the thalamus (ANT) as compared to the involvement of the hippocampus in the processes of encoding and recognition during visual and verbal memory tasks. We studied intracerebral recordings in patients with pharmacoresistant epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. P300-like potentials were recorded in the hippocampus by visual and verbal memory encoding and recognition tasks and in the ANT by the visual encoding and visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures.

  2. Recognizing biological motion and emotions from point-light displays in autism spectrum disorders.

    PubMed

    Nackaerts, Evelien; Wagemans, Johan; Helsen, Werner; Swinnen, Stephan P; Wenderoth, Nicole; Alaerts, Kaat

    2012-01-01

    Among the main characteristics of Autism Spectrum Disorder (ASD) are problems with social interaction and communication. Here, we explored ASD-related alterations in 'reading' body language of other humans. Accuracy and reaction times were assessed from two observational tasks involving the recognition of 'biological motion' and 'emotions' from point-light displays (PLDs). Eye movements were recorded during the completion of the tests. Results indicated that typically developed-participants were more accurate than ASD-subjects in recognizing biological motion or emotions from PLDs. No accuracy differences were revealed on two control-tasks (involving the indication of color-changes in the moving point-lights). Group differences in reaction times existed on all tasks, but effect sizes were higher for the biological and emotion recognition tasks. Biological motion recognition abilities were related to a person's ability to recognize emotions from PLDs. However, ASD-related atypicalities in emotion recognition could not entirely be attributed to more basic deficits in biological motion recognition, suggesting an additional ASD-specific deficit in recognizing the emotional dimension of the point light displays. Eye movements were assessed during the completion of tasks and results indicated that ASD-participants generally produced more saccades and shorter fixation-durations compared to the control-group. However, especially for emotion recognition, these altered eye movements were associated with reductions in task-performance.

  3. How a Hat May Affect 3-Month-Olds' Recognition of a Face: An Eye-Tracking Study

    PubMed Central

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants’ face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants’ ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants’ face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants’ ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants’ attention, interfering with the recognition process and preventing the infants’ preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment. PMID:24349378

  4. How a hat may affect 3-month-olds' recognition of a face: an eye-tracking study.

    PubMed

    Bulf, Hermann; Valenza, Eloisa; Turati, Chiara

    2013-01-01

    Recent studies have shown that infants' face recognition rests on a robust face representation that is resilient to a variety of facial transformations such as rotations in depth, motion, occlusion or deprivation of inner/outer features. Here, we investigated whether 3-month-old infants' ability to represent the invariant aspects of a face is affected by the presence of an external add-on element, i.e. a hat. Using a visual habituation task, three experiments were carried out in which face recognition was investigated by manipulating the presence/absence of a hat during face encoding (i.e. habituation phase) and face recognition (i.e. test phase). An eye-tracker system was used to record the time infants spent looking at face-relevant information compared to the hat. The results showed that infants' face recognition was not affected by the presence of the external element when the type of the hat did not vary between the habituation and test phases, and when both the novel and the familiar face wore the same hat during the test phase (Experiment 1). Infants' ability to recognize the invariant aspects of a face was preserved also when the hat was absent in the habituation phase and the same hat was shown only during the test phase (Experiment 2). Conversely, when the novel face identity competed with a novel hat, the hat triggered the infants' attention, interfering with the recognition process and preventing the infants' preference for the novel face during the test phase (Experiment 3). Findings from the current study shed light on how faces and objects are processed when they are simultaneously presented in the same visual scene, contributing to an understanding of how infants respond to the multiple and composite information available in their surrounding environment.

  5. Conflict resolved: On the role of spatial attention in reading and color naming tasks.

    PubMed

    Robidoux, Serje; Besner, Derek

    2015-12-01

    The debate about whether or not visual word recognition requires spatial attention has been marked by a conflict: the results from different tasks yield different conclusions. Experiments in which the primary task is reading-based show no evidence that unattended words are processed, whereas when the primary task is color identification, supposedly unattended words do affect processing. However, the color stimuli used to date do not appear to demand as much spatial attention as explicit word reading tasks. We first identify a color stimulus that requires as much spatial attention to identify as does a word. We then demonstrate that when spatial attention is appropriately captured, distractor words in unattended locations do not affect color identification. We conclude that there is no word identification without spatial attention.

  6. Judging Normality and Attractiveness in Faces: Direct Evidence of a More Refined Representation for Own-Race, Young Adult Faces.

    PubMed

    Zhou, Xiaomei; Short, Lindsey A; Chan, Harmonie S J; Mondloch, Catherine J

    2016-09-01

    Young and older adults are more sensitive to deviations from normality in young than older adult faces, suggesting that the dimensions of face space are optimized for young adult faces. Here, we extend these findings to own-race faces and provide converging evidence using an attractiveness rating task. In Experiment 1, Caucasian and Chinese adults were shown own- and other-race face pairs; one member was undistorted and the other had compressed or expanded features. Participants indicated which member of each pair was more normal (a task that requires referencing a norm) and which was more expanded (a task that simply requires discrimination). Participants showed an own-race advantage in the normality task but not the discrimination task. In Experiment 2, participants rated the facial attractiveness of own- and other-race faces (Experiment 2a) or young and older adult faces (Experiment 2b). Between-rater variability in ratings of individual faces was higher for other-race and older adult faces; reduced consensus in attractiveness judgments reflects a less refined face space. Collectively, these results provide direct evidence that the dimensions of face space are optimized for own-race and young adult faces, which may underlie face race- and age-based deficits in recognition. © The Author(s) 2016.

  7. Ad hoc categories and false memories: Memory illusions for categories created on-the-spot.

    PubMed

    Soro, Jerônimo C; Ferreira, Mário B; Semin, Gün R; Mata, André; Carneiro, Paula

    2017-11-01

    Three experiments were designed to test whether experimentally created ad hoc associative networks evoke false memories. We used the DRM (Deese, Roediger, McDermott) paradigm with lists of ad hoc categories composed of exemplars aggregated toward specific goals (e.g., going for a picnic) that do not share any consistent set of features. Experiment 1 revealed considerable levels of false recognitions of critical words from ad hoc categories. False recognitions occurred even when the lists were presented without an organizing theme (i.e., the category's label). Experiments 1 and 2 tested whether (a) the ease of identifying the categories' themes, and (b) the lists' backward associative strength could be driving the effect. List identifiability did not correlate with false recognition, and the effect remained even when backward associative strength was controlled for. Experiment 3 manipulated the distractor items in the recognition task to address the hypothesis that the salience of unrelated items could be facilitating the occurrence of the phenomenon. The effect remained when controlling for this source of facilitation. These results have implications for assumptions made by theories of false memories, namely the preexistence of associations in the activation-monitoring framework and the central role of gist extraction in fuzzy-trace theory, while providing evidence of the occurrence of false memories for more dynamic and context-dependent knowledge structures. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  8. Music-induced changes in functional cerebral asymmetries.

    PubMed

    Hausmann, Markus; Hodgetts, Sophie; Eerola, Tuomas

    2016-04-01

    After decades of research, it remains unclear whether emotion lateralization occurs because one hemisphere is dominant for processing the emotional content of the stimuli, or whether emotional stimuli activate lateralised networks associated with the subjective emotional experience. By using emotion-induction procedures, we investigated the effect of listening to happy and sad music on three well-established lateralization tasks. In a prestudy, Mozart's piano sonata (K. 448) and Beethoven's Moonlight Sonata were rated as the most happy and sad excerpts, respectively. Participants listened to either one emotional excerpt, or sat in silence before completing an emotional chimeric faces task (Experiment 1), visual line bisection task (Experiment 2) and a dichotic listening task (Experiment 3 and 4). Listening to happy music resulted in a reduced right hemispheric bias in facial emotion recognition (Experiment 1) and visuospatial attention (Experiment 2) and increased left hemispheric bias in language lateralization (Experiments 3 and 4). Although Experiments 1-3 revealed an increased positive emotional state after listening to happy music, mediation analyses revealed that the effect on hemispheric asymmetries was not mediated by music-induced emotional changes. The direct effect of music listening on lateralization was investigated in Experiment 4 in which tempo of the happy excerpt was manipulated by controlling for other acoustic features. However, the results of Experiment 4 made it rather unlikely that tempo is the critical cue accounting for the effects. We conclude that listening to music can affect functional cerebral asymmetries in well-established emotional and cognitive laterality tasks, independent of music-induced changes in the emotion state. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Distinct roles of basal forebrain cholinergic neurons in spatial and object recognition memory.

    PubMed

    Okada, Kana; Nishizawa, Kayo; Kobayashi, Tomoko; Sakata, Shogo; Kobayashi, Kazuto

    2015-08-06

    Recognition memory requires processing of various types of information such as objects and locations. Impairment in recognition memory is a prominent feature of amnesia and a symptom of Alzheimer's disease (AD). Basal forebrain cholinergic neurons contain two major groups, one localized in the medial septum (MS)/vertical diagonal band of Broca (vDB), and the other in the nucleus basalis magnocellularis (NBM). The roles of these cell groups in recognition memory have been debated, and it remains unclear how they contribute to it. We use a genetic cell targeting technique to selectively eliminate cholinergic cell groups and then test spatial and object recognition memory through different behavioural tasks. Eliminating MS/vDB neurons impairs spatial but not object recognition memory in the reference and working memory tasks, whereas NBM elimination undermines only object recognition memory in the working memory task. These impairments are restored by treatment with acetylcholinesterase inhibitors, anti-dementia drugs for AD. Our results highlight that MS/vDB and NBM cholinergic neurons are not only implicated in recognition memory but also have essential roles in different types of recognition memory.

  10. Using eye movements as an index of implicit face recognition in autism spectrum disorder.

    PubMed

    Hedley, Darren; Young, Robyn; Brewer, Neil

    2012-10-01

    Individuals with an autism spectrum disorder (ASD) typically show impairment on face recognition tasks. Performance has usually been assessed using overt, explicit recognition tasks. Here, a complementary method involving eye tracking was used to examine implicit face recognition in participants with ASD and in an intelligence quotient-matched non-ASD control group. Differences in eye movement indices between target and foil faces were used as an indicator of implicit face recognition. Explicit face recognition was assessed using old-new discrimination and reaction time measures. Stimuli were faces of studied (target) or unfamiliar (foil) persons. Target images at test were either identical to the images presented at study or altered by changing the lighting, pose, or by masking with visual noise. Participants with ASD performed worse than controls on the explicit recognition task. Eye movement-based measures, however, indicated that implicit recognition may not be affected to the same degree as explicit recognition. Autism Res 2012, 5: 363-379. © 2012 International Society for Autism Research, Wiley Periodicals, Inc.

  11. Does letter position coding depend on consonant/vowel status? Evidence with the masked priming technique.

    PubMed

    Perea, Manuel; Acha, Joana

    2009-02-01

    Recently, a number of input coding schemes (e.g., SOLAR model, SERIOL model, open-bigram model, overlap model) have been proposed that capture the transposed-letter priming effect (i.e., faster response times for jugde-JUDGE than for jupte-JUDGE). In their current version, these coding schemes do not assume any processing differences between vowels and consonants. However, in a lexical decision task, Perea and Lupker (2004, JML) and Lupker, Perea, and Davis (2008, L&CP) reported that transposed-letter priming effects occurred for consonant transpositions but not for vowel transpositions. This finding poses a challenge for these recently proposed coding schemes. Here, we report four masked priming experiments that examine whether this consonant/vowel dissociation in transposed-letter priming is task-specific. In Experiment 1, we used a lexical decision task and found a transposed-letter priming effect only for consonant transpositions. In Experiments 2-4, we employed a same-different task - a task which taps early perceptual processes - and found a robust transposed-letter priming effect that did not interact with consonant/vowel status. We examine the implications of these findings for the front end of models of visual word recognition.

  12. Recall and recognition of verbal paired associates in early Alzheimer's disease.

    PubMed

    Lowndes, G J; Saling, M M; Ames, D; Chiu, E; Gonzalez, L M; Savage, G R

    2008-07-01

    The primary impairment in early Alzheimer's disease (AD) is encoding/consolidation, resulting from medial temporal lobe (MTL) pathology. AD patients perform poorly on cued-recall paired associate learning (PAL) tasks, which assess the ability of the MTLs to encode relational memory. Since encoding and retrieval processes are confounded within performance indexes on cued-recall PAL, its specificity for AD is limited. Recognition paradigms tend to show good specificity for AD, and are well tolerated, but are typically less sensitive than recall tasks. Associate-recognition is a novel PAL task requiring a combination of recall and recognition processes. We administered a verbal associate-recognition test and a cued-recall analogue to 22 early AD patients and 55 elderly controls to compare their ability to discriminate these groups. Both paradigms used eight arbitrarily related word pairs (e.g., pool-teeth) with varying degrees of imageability. Associate-recognition was as effective as the cued-recall analogue in discriminating the groups, and logistic regression demonstrated that classification rates for the two tasks were equivalent. These preliminary findings provide support for the clinical value of this recognition tool. Conceptually, it has potential for greater specificity in informing neuropsychological diagnosis of AD in clinical samples, but this requires further empirical support.

  13. Recognition of visual stimuli and memory for spatial context in schizophrenic patients and healthy volunteers.

    PubMed

    Brébion, Gildas; David, Anthony S; Pilowsky, Lyn S; Jones, Hugh

    2004-11-01

    Verbal and visual recognition tasks were administered to 40 patients with schizophrenia and 40 healthy comparison subjects. The verbal recognition task consisted of discriminating between 16 target words and 16 new words. The visual recognition task consisted of discriminating between 16 target pictures (8 black-and-white and 8 color) and 16 new pictures (8 black-and-white and 8 color). Visual recognition was followed by a spatial context discrimination task in which subjects were required to remember the spatial location of the target pictures at encoding. Results showed that the recognition deficit in patients was similar for verbal and visual material. In both the schizophrenic and healthy groups, men, but not women, obtained better recognition scores for the colored than for the black-and-white pictures. However, men and women benefited similarly from color in reducing spatial context discrimination errors. Patients showed a significant deficit in remembering the spatial location of the pictures, independently of accuracy in remembering the pictures themselves. These data suggest that patients are impaired in the amount of visual information that they can encode. With regard to the perceptual attributes of the stimuli, memory for spatial information appears to be affected, but not processing of color information.
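    The old/new discrimination in tasks like these is conventionally scored with signal detection measures; the abstract above does not specify which index was used, so the following is only an illustrative sketch of the standard d' computation from hit and false-alarm counts, with the usual correction for extreme proportions.

      # Illustrative only: standard signal-detection scoring for an old/new
      # recognition task (not necessarily the index used in this study).
      from statistics import NormalDist

      def d_prime(hits, misses, false_alarms, correct_rejections):
          """d' from raw old/new response counts, with a log-linear correction
          (add 0.5 per cell) so rates of 0 or 1 do not give infinite z-scores."""
          hit_rate = (hits + 0.5) / (hits + misses + 1.0)
          fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
          z = NormalDist().inv_cdf
          return z(hit_rate) - z(fa_rate)

      # Example: 16 target and 16 new pictures, as in the task described above.
      print(round(d_prime(hits=13, misses=3, false_alarms=4, correct_rejections=12), 2))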

  14. When study abroad experience fails to deliver: The internal resources threshold effect

    PubMed Central

    Sunderman, Gretchen; Kroll, Judith F.

    2009-01-01

    Some second language (L2) learners return from study abroad experiences with seemingly no change in their L2 ability. In this study we investigate whether a certain level of internal cognitive resources is necessary in order for individuals to take full advantage of the study abroad experience. Specifically, we examined the role of working memory resources in lexical comprehension and production for learners who had or had not studied abroad. Participants were native English speakers learning Spanish. Participants completed a translation recognition task and a picture-naming task. The results suggest that individuals who lack a certain threshold of working memory resources are unable to benefit from the study abroad context in terms of accurate production in the L2. PMID:19714256

  15. Chronic treatment with sulbutiamine improves memory in an object recognition task and reduces some amnesic effects of dizocilpine in a spatial delayed-non-match-to-sample task.

    PubMed

    Bizot, Jean-Charles; Herpin, Alexandre; Pothion, Stéphanie; Pirot, Sylvain; Trovero, Fabrice; Ollat, Hélène

    2005-07-01

    The effect of chronic sulbutiamine treatment on memory was studied in rats with a spatial delayed-non-match-to-sample (DNMTS) task in a radial maze and a two-trial object recognition task. After completion of training in the DNMTS task, animals were subjected for 9 weeks to daily injections of either saline or sulbutiamine (12.5 or 25 mg/kg). Sulbutiamine did not modify memory in the DNMTS task but improved it in the object recognition task. Dizocilpine impaired both acquisition and retention of the DNMTS task in the saline-treated group, but not in the two sulbutiamine-treated groups, suggesting that sulbutiamine may counteract the amnesia induced by a blockade of the N-methyl-D-aspartate glutamate receptors. Taken together, these results are in favor of a beneficial effect of sulbutiamine on working and episodic memory.

  16. Size-Sensitive Perceptual Representations Underlie Visual and Haptic Object Recognition

    PubMed Central

    Craddock, Matt; Lawson, Rebecca

    2009-01-01

    A variety of similarities between visual and haptic object recognition suggests that the two modalities may share common representations. However, it is unclear whether such common representations preserve low-level perceptual features or whether transfer between vision and haptics is mediated by high-level, abstract representations. Two experiments used a sequential shape-matching task to examine the effects of size changes on unimodal and crossmodal visual and haptic object recognition. Participants felt or saw 3D plastic models of familiar objects. The two objects presented on a trial were either the same size or different sizes and were the same shape or different but similar shapes. Participants were told to ignore size changes and to match on shape alone. In Experiment 1, size changes on same-shape trials impaired performance similarly for both visual-to-visual and haptic-to-haptic shape matching. In Experiment 2, size changes impaired performance on both visual-to-haptic and haptic-to-visual shape matching and there was no interaction between the cost of size changes and direction of transfer. Together the unimodal and crossmodal matching results suggest that the same, size-specific perceptual representations underlie both visual and haptic object recognition, and indicate that crossmodal memory for objects must be at least partly based on common perceptual representations. PMID:19956685

  17. Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars

    NASA Astrophysics Data System (ADS)

    Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed

    2016-02-01

    Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning.
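    The learning architecture is only sketched at a high level in the abstract. Purely as a hypothetical illustration (none of the module names, representations, or thresholds below come from the paper), the following shows one way feature extraction, posture prototypes, novelty detection, and per-agent interaction statistics might be composed so that an interaction partner can be recognized in a later encounter.

      # Hypothetical sketch only: a toy composition of feature extraction,
      # posture recognition, novelty detection, and per-agent signatures.
      # This is not the architecture reported in the paper.
      import numpy as np

      rng = np.random.default_rng(0)

      class ImitationRecognizer:
          def __init__(self, input_dim=64, n_features=16, novelty_threshold=4.0):
              # A random projection stands in for a learned visual feature extractor.
              self.projection = rng.normal(size=(input_dim, n_features))
              self.posture_prototypes = []      # stored posture feature vectors
              self.agent_signatures = {}        # agent id -> mean features per episode
              self.novelty_threshold = novelty_threshold

          def features(self, frame):
              return frame @ self.projection

          def is_novel(self, feat):
              # Novelty detection: far from every stored posture prototype.
              if not self.posture_prototypes:
                  return True
              return min(np.linalg.norm(feat - p) for p in self.posture_prototypes) > self.novelty_threshold

          def observe_episode(self, agent_id, frames):
              feats = np.array([self.features(f) for f in frames])
              for f in feats:
                  if self.is_novel(f):
                      self.posture_prototypes.append(f)   # learn a new posture category
              self.agent_signatures.setdefault(agent_id, []).append(feats.mean(axis=0))

          def recognize(self, frames):
              # Return the previously encountered agent whose episode signature
              # is closest to the current interaction.
              query = np.array([self.features(f) for f in frames]).mean(axis=0)
              return min(self.agent_signatures,
                         key=lambda a: min(np.linalg.norm(query - s)
                                           for s in self.agent_signatures[a]))

    In the actual study these components are artificial neural networks trained during mutual imitation; the toy version above only preserves the overall flow from visual features to agent identification.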

  18. Robots Learn to Recognize Individuals from Imitative Encounters with People and Avatars

    PubMed Central

    Boucenna, Sofiane; Cohen, David; Meltzoff, Andrew N.; Gaussier, Philippe; Chetouani, Mohamed

    2016-01-01

    Prior to language, human infants are prolific imitators. Developmental science grounds infant imitation in the neural coding of actions, and highlights the use of imitation for learning from and about people. Here, we used computational modeling and a robot implementation to explore the functional value of action imitation. We report 3 experiments using a mutual imitation task between robots, adults, typically developing children, and children with Autism Spectrum Disorder. We show that a particular learning architecture - specifically one combining artificial neural nets for (i) extraction of visual features, (ii) the robot’s motor internal state, (iii) posture recognition, and (iv) novelty detection - is able to learn from an interactive experience involving mutual imitation. This mutual imitation experience allowed the robot to recognize the interactive agent in a subsequent encounter. These experiments using robots as tools for modeling human cognitive development, based on developmental theory, confirm the promise of developmental robotics. Additionally, findings illustrate how person recognition may emerge through imitative experience, intercorporeal mapping, and statistical learning. PMID:26844862

  19. Social Recognition Memory Requires Two Stages of Protein Synthesis in Mice

    ERIC Educational Resources Information Center

    Wolf, Gerald; Engelmann, Mario; Richter, Karin

    2005-01-01

    Olfactory recognition memory was tested in adult male mice using a social discrimination task. The testing was conducted to begin to characterize the role of protein synthesis and the specific brain regions associated with activity in this task. Long-term olfactory recognition memory was blocked when the protein synthesis inhibitor anisomycin was…

  20. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind

    PubMed Central

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T.; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J.; Sadato, Norihiro

    2012-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience. PMID:23372547

  1. Early visual experience and the recognition of basic facial expressions: involvement of the middle temporal and inferior frontal gyri during haptic identification by the early blind.

    PubMed

    Kitada, Ryo; Okamoto, Yuko; Sasaki, Akihiro T; Kochiyama, Takanori; Miyahara, Motohide; Lederman, Susan J; Sadato, Norihiro

    2013-01-01

    Face perception is critical for social communication. Given its fundamental importance in the course of evolution, the innate neural mechanisms can anticipate the computations necessary for representing faces. However, the effect of visual deprivation on the formation of neural mechanisms that underlie face perception is largely unknown. We previously showed that sighted individuals can recognize basic facial expressions by haptics surprisingly well. Moreover, the inferior frontal gyrus (IFG) and posterior superior temporal sulcus (pSTS) in the sighted subjects are involved in haptic and visual recognition of facial expressions. Here, we conducted both psychophysical and functional magnetic-resonance imaging (fMRI) experiments to determine the nature of the neural representation that subserves the recognition of basic facial expressions in early blind individuals. In a psychophysical experiment, both early blind and sighted subjects haptically identified basic facial expressions at levels well above chance. In the subsequent fMRI experiment, both groups haptically identified facial expressions and shoe types (control). The sighted subjects then completed the same task visually. Within brain regions activated by the visual and haptic identification of facial expressions (relative to that of shoes) in the sighted group, corresponding haptic identification in the early blind activated regions in the inferior frontal and middle temporal gyri. These results suggest that the neural system that underlies the recognition of basic facial expressions develops supramodally even in the absence of early visual experience.

  2. Turning an advantage into a disadvantage: ambiguity effects in lexical decision versus reading tasks.

    PubMed

    Piercey, C D; Joordens, S

    2000-06-01

    When performing a lexical decision task, participants can correctly categorize letter strings as words faster if they have multiple meanings (i.e., ambiguous words) than if they have one meaning (i.e., unambiguous words). In contrast, when reading connected text, participants tend to fixate longer on ambiguous words than on unambiguous words. Why are ambiguous words at an advantage in one word recognition task, and at a disadvantage in another? These disparate results can be reconciled if it is assumed that ambiguous words are relatively fast to reach a semantic-blend state sufficient for supporting lexical decisions, but then slow to escape the blend when the task requires a specific meaning be retrieved. We report several experiments that support this possibility.

  3. Do dyslexic individuals present a reduced visual attention span? Evidence from visual recognition tasks of non-verbal multi-character arrays.

    PubMed

    Yeari, Menahem; Isser, Michal; Schiff, Rachel

    2017-07-01

    A controversy has recently developed regarding the hypothesis that developmental dyslexia may be caused, in some cases, by a reduced visual attention span (VAS). To examine this hypothesis, independent of phonological abilities, researchers tested the ability of dyslexic participants to recognize arrays of unfamiliar visual characters. Findings with this test were rather equivocal: dyslexic participants exhibited poor performance in some studies but normal performance in others. The present study explored four methodological differences revealed between the two sets of studies that might underlie their conflicting results. Specifically, in two experiments we examined whether a VAS deficit is (a) specific to recognition of multi-character arrays as wholes rather than of individual characters within arrays, (b) specific to characters' position within arrays rather than to characters' identity, or revealed only under a higher attention load due to (c) low-discriminable characters, and/or (d) characters' short exposure. Furthermore, in this study we examined whether pure dyslexic participants who do not have attention disorder exhibit a reduced VAS. Although comorbidity of dyslexia and attention disorder is common and the ability to sustain attention for a long time plays a major role in the visual recognition task, the presence of attention disorder was neither evaluated nor ruled out in previous studies. Findings did not reveal any differences between the performance of dyslexic and control participants on eight versions of the visual recognition task. These findings suggest that pure dyslexic individuals do not present a reduced visual attention span.

  4. PCANet: A Simple Deep Learning Baseline for Image Classification?

    PubMed

    Chan, Tsung-Han; Jia, Kui; Gao, Shenghua; Lu, Jiwen; Zeng, Zinan; Ma, Yi

    2015-12-01

    In this paper, we propose a very simple deep learning network for image classification that is based on very basic data processing components: 1) cascaded principal component analysis (PCA); 2) binary hashing; and 3) blockwise histograms. In the proposed architecture, the PCA is employed to learn multistage filter banks. This is followed by simple binary hashing and block histograms for indexing and pooling. This architecture is thus called the PCA network (PCANet) and can be designed and learned very easily and efficiently. For comparison and to provide a better understanding, we also introduce and study two simple variations of PCANet: 1) RandNet and 2) LDANet. They share the same topology as PCANet, but their cascaded filters are either randomly selected or learned from linear discriminant analysis. We have extensively tested these basic networks on many benchmark visual data sets for different tasks, including Labeled Faces in the Wild (LFW) for face verification; the MultiPIE, Extended Yale B, AR, and Facial Recognition Technology (FERET) data sets for face recognition; and MNIST for hand-written digit recognition. Surprisingly, for all tasks, such a seemingly naive PCANet model is on par with state-of-the-art features, whether prefixed, highly hand-crafted, or carefully learned [by deep neural networks (DNNs)]. Even more surprisingly, the model sets new records for many classification tasks on the Extended Yale B, AR, and FERET data sets and on MNIST variations. Additional experiments on other public data sets also demonstrate the potential of PCANet to serve as a simple but highly competitive baseline for texture classification and object recognition.
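    As a rough, single-stage illustration of the three components named above (a PCA filter bank, binary hashing, and blockwise histograms), the sketch below learns filters from image patches and produces a histogram feature vector. Patch size, number of filters, and block size are arbitrary placeholders, not the configuration reported for PCANet.

      # Single-stage sketch of the PCANet idea: PCA filter bank -> binary
      # hashing -> blockwise histograms. Parameter values are placeholders.
      import numpy as np

      def extract_patches(img, k=7):
          """All k x k patches of a 2-D image as flattened, mean-removed rows."""
          h, w = img.shape
          patches = np.array([img[i:i + k, j:j + k].ravel()
                              for i in range(h - k + 1) for j in range(w - k + 1)])
          return patches - patches.mean(axis=1, keepdims=True)

      def pca_filters(images, k=7, n_filters=8):
          """Learn a PCA filter bank from the pooled patches of the training images."""
          X = np.vstack([extract_patches(img, k) for img in images])
          _, _, vt = np.linalg.svd(X, full_matrices=False)
          return vt[:n_filters]                       # each row is one k*k filter

      def pcanet_features(img, filters, k=7, block=8):
          """Filter responses -> binary codes -> blockwise histograms."""
          responses = extract_patches(img, k) @ filters.T
          bits = (responses > 0).astype(int)          # binary hashing
          codes = bits @ (2 ** np.arange(filters.shape[0]))
          side = img.shape[0] - k + 1
          code_map = codes.reshape(side, -1)
          hists = []
          for i in range(0, code_map.shape[0] - block + 1, block):
              for j in range(0, code_map.shape[1] - block + 1, block):
                  hists.append(np.bincount(code_map[i:i + block, j:j + block].ravel(),
                                           minlength=2 ** filters.shape[0]))
          return np.concatenate(hists)

      # Toy usage: learn filters from random "images" and featurize one of them.
      rng = np.random.default_rng(0)
      train = [rng.normal(size=(32, 32)) for _ in range(5)]
      print(pcanet_features(train[0], pca_filters(train)).shape)

    The published PCANet stacks two such stages and feeds the resulting histograms to a simple linear classifier; the RandNet and LDANet variants mentioned above differ only in how the filters are obtained.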

  5. Age-related differences in listening effort during degraded speech recognition

    PubMed Central

    Ward, Kristina M.; Shen, Jing; Souza, Pamela E.; Grieco-Calub, Tina M.

    2016-01-01

    Objectives: The purpose of the current study was to quantify age-related differences in executive control as it relates to dual-task performance, which is thought to represent listening effort, during degraded speech recognition. Design: Twenty-five younger adults (18–24 years) and twenty-one older adults (56–82 years) completed a dual-task paradigm that consisted of a primary speech recognition task and a secondary visual monitoring task. Sentence material in the primary task was either unprocessed or spectrally degraded into 8, 6, or 4 spectral channels using noise-band vocoding. Performance on the visual monitoring task was assessed by the accuracy and reaction time of participants’ responses. Performance on the primary and secondary tasks was quantified in isolation (i.e., single task) and during the dual-task paradigm. Participants also completed a standardized psychometric measure of executive control, including attention and inhibition. Statistical analyses were implemented to evaluate changes in listeners’ performance on the primary and secondary tasks (1) per condition (unprocessed vs. vocoded conditions); (2) per task (baseline vs. dual task); and (3) per group (younger vs. older adults). Results: Speech recognition declined with increasing spectral degradation for both younger and older adults when they performed the task in isolation or concurrently with the visual monitoring task. Older adults were slower and less accurate than younger adults on the visual monitoring task when performed in isolation, which paralleled age-related differences in standardized scores of executive control. When compared to single-task performance, older adults experienced greater declines in secondary-task accuracy, but not reaction time, than younger adults. Furthermore, results revealed that age-related differences in executive control significantly contributed to age-related differences on the visual monitoring task during the dual-task paradigm. Conclusions: Older adults experienced significantly greater declines in secondary-task accuracy during degraded speech recognition than younger adults. These findings are interpreted as suggesting that older listeners expended greater listening effort than younger listeners, which may be partially attributed to age-related differences in executive control. PMID:27556526
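    As a small worked example of the dual-task logic described above, listening effort is typically indexed by the decline from single-task to dual-task performance on the secondary task; the numbers below are invented for illustration and are not data from the study.

      # Invented numbers, purely to illustrate how a dual-task cost is derived
      # from single-task baselines; these are not values from the study.
      def dual_task_cost(single, dual):
          """Proportional decline from single-task to dual-task performance."""
          return (single - dual) / single

      # Hypothetical secondary-task (visual monitoring) accuracy:
      younger = dual_task_cost(single=0.96, dual=0.90)
      older = dual_task_cost(single=0.93, dual=0.78)
      print(f"younger cost: {younger:.1%}, older cost: {older:.1%}")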

  6. Human target acquisition performance

    NASA Astrophysics Data System (ADS)

    Teaney, Brian P.; Du Bosq, Todd W.; Reynolds, Joseph P.; Thompson, Roger; Aghera, Sameer; Moyer, Steven K.; Flug, Eric; Espinola, Richard; Hixson, Jonathan

    2012-06-01

    The battlefield has shifted from armored vehicles to armed insurgents. Target acquisition (identification, recognition, and detection) range performance involving humans as targets is vital for modern warfare. The acquisition and neutralization of armed insurgents while at the same time minimizing fratricide and civilian casualties is a mounting concern. U.S. Army RDECOM CERDEC NVESD has conducted many experiments involving human targets for infrared and reflective band sensors. The target sets include human activities, hand-held objects, uniforms & armament, and other tactically relevant targets. This paper will define a set of standard task difficulty values for identification and recognition associated with human target acquisition performance.

  7. Within-person adaptivity in frugal judgments from memory.

    PubMed

    Filevich, Elisa; Horn, Sebastian S; Kühn, Simone

    2017-12-22

    Humans can exploit recognition memory as a simple cue for judgment. The utility of recognition depends on the interplay with the environment, particularly on its predictive power (validity) in a domain. It is, therefore, an important question whether people are sensitive to differences in recognition validity between domains. Strategic, intra-individual changes in the reliance on recognition have not been investigated so far. The present study fills this gap by scrutinizing within-person changes in using a frugal strategy, the recognition heuristic (RH), across two task domains that differed in recognition validity. The results showed adaptive changes in the reliance on recognition between domains. However, these changes were neither associated with the individual recognition validities nor with corresponding changes in these validities. These findings support a domain-adaptivity explanation, suggesting that people have broader intuitions about the usefulness of recognition across different domains that are nonetheless sufficiently robust for adaptive decision making. The analysis of metacognitive confidence reports mirrored and extended these results. Like RH use, confidence ratings covaried with task domain, but not with individual recognition validities. The changes in confidence suggest that people may have metacognitive access to information about global differences between task domains, but not to individual cue validities.
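    The recognition heuristic referred to above is itself a simple decision rule: if exactly one of two objects is recognized, infer that it has the higher criterion value; otherwise fall back on guessing or other knowledge. A minimal sketch, with made-up recognition sets rather than the study's materials:

      # Minimal sketch of the recognition heuristic (RH) for a two-alternative
      # judgment; the recognition set and items are illustrative, not from the study.
      import random

      def recognition_heuristic(a, b, recognized):
          """Choose the option inferred to score higher on the criterion."""
          if a in recognized and b not in recognized:
              return a
          if b in recognized and a not in recognized:
              return b
          return random.choice([a, b])   # RH does not apply: both or neither recognized

      recognized_cities = {"Berlin", "Munich", "Hamburg"}
      print(recognition_heuristic("Berlin", "Bielefeld", recognized_cities))  # -> Berlin

    Recognition validity in a domain is then the proportion of applicable pairs in which the recognized object really does have the higher criterion value, which is what makes the heuristic more or less useful across domains.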

  8. Response procedure, memory, and dichotic emotion recognition.

    PubMed

    Voyer, Daniel; Dempsey, Danielle; Harding, Jennifer A

    2014-03-01

    Three experiments investigated the role of memory and rehearsal in a dichotic emotion recognition task by manipulating the response procedure as well as the interval between encoding and retrieval while taking into account order of report. For all experiments, right-handed undergraduates were presented with dichotic pairs of the words bower, dower, power, and tower pronounced in a sad, angry, happy, or neutral tone of voice. Participants were asked to report the two emotions presented on each trial by clicking on the corresponding drawings or words on a computer screen, either following no delay or a five-second delay. Experiment 1 applied the delay conditions as a between-subjects factor, whereas it was a within-subject factor in Experiment 2. In Experiments 1 and 2, more correct responses occurred for the left than the right ear, reflecting a left ear advantage (LEA) that was slightly larger with a nonverbal than a verbal response. The LEA was also found to be larger with no delay than with the 5-s delay. In addition, participants typically responded first to the left ear stimulus. In fact, the first response produced a LEA whereas the second response produced a right ear advantage. Experiment 3 involved a concurrent task during the delay to prevent rehearsal. In Experiment 3, the pattern of results supported the claim that rehearsal could account for the findings of the first two experiments. The findings are interpreted in the context of the role of rehearsal and memory in models of dichotic listening. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Can anchor models explain inverted-U effects in facial judgments?

    PubMed

    Mignault, Alain; Bhaumik, Arijit; Chaudhuri, Avi

    2009-06-01

    Researchers in a variety of disciplines have found that participants take less time and generate less diversity of responses when judging stimuli towards the ends of a scale than when judging those near the center. Three types of models, connectionist, exemplar, and anchor models, can account for these inverted-U effects. Anchor models assume that stimuli near the ends of the scale are used as anchors to compare with the other stimuli, implying that anchor representations are activated for each judgment. Therefore, participants should learn the anchors better than the other stimuli. Participants were 40 students from the Department of Psychology at McGill University (5 men; M age = 20.5 yr.; SD = 1.7). The experiment involved two tasks: first participants judged facial gender and then performed a recognition task. The results showed no correlation between the position on the gender scale and recognition accuracy. Several hypotheses were offered to explain these results.

  10. Control of working memory: effects of attention training on target recognition and distractor salience in an auditory selection task.

    PubMed

    Melara, Robert D; Tong, Yunxia; Rao, Aparna

    2012-01-09

    Behavioral and electrophysiological measures of target and distractor processing were examined in an auditory selective attention task before and after three weeks of distractor suppression training. Behaviorally, training improved target recognition and led to less conservative and more rapid responding. Training also effectively shortened the temporal distance between distractors and targets needed to achieve a fixed level of target sensitivity. The effects of training on event-related potentials were restricted to the distracting stimulus: earlier N1 latency, enhanced P2 amplitude, and weakened P3 amplitude. Nevertheless, as distractor P2 amplitude increased, so too did target P3 amplitude, connecting experience-dependent changes in distractor processing with greater distinctiveness of targets in working memory. We consider the effects of attention training on the processing priorities, representational noise, and inhibitory processes operating in working memory. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. The effects of articulatory suppression on word recognition in Serbian.

    PubMed

    Tenjović, Lazar; Lalović, Dejan

    2005-11-01

    The relatedness of phonological coding to articulatory mechanisms in visual word recognition varies across writing systems. While articulatory suppression (i.e., continuous verbalising during a visual word processing task) has a detrimental effect on the processing of Japanese words printed in the regular syllabic Kana script, it has no such effect on the processing of irregular alphabetic English words. Besner (1990) proposed an experiment in the Serbian language, which is written in two regular alphabetic scripts (Cyrillic and Roman), to disentangle the importance of script regularity vs. the syllabic-alphabetic dimension for the effects observed. Articulatory suppression had an equally detrimental effect in a lexical decision task for both alphabetically regular and distorted (by a mixture of the two alphabets) Serbian words, but comparisons of articulatory suppression effect sizes obtained in Serbian to those obtained in English and Japanese suggest "alphabeticity-syllabicity" to be the more critical dimension in determining the relatedness of phonological coding and articulatory activity.

  12. Sensitivity of negative subsequent memory and task-negative effects to age and associative memory performance.

    PubMed

    de Chastelaine, Marianne; Mattson, Julia T; Wang, Tracy H; Donley, Brian E; Rugg, Michael D

    2015-07-01

    The present fMRI experiment employed associative recognition to investigate the relationships between age and encoding-related negative subsequent memory effects and task-negative effects. Young, middle-aged and older adults (total n=136) were scanned while they made relational judgments on visually presented word pairs. In a later memory test, the participants made associative recognition judgments on studied, rearranged (items studied on different trials) and new pairs. Several regions, mostly localized to the default mode network, demonstrated negative subsequent memory effects in an across age-group analysis. All but one of these regions also demonstrated task-negative effects, although there was no correlation between the size of the respective effects. Whereas negative subsequent memory effects demonstrated a graded attenuation with age, task-negative effects declined markedly between the young and the middle-aged group, but showed no further reduction in the older group. Negative subsequent memory effects did not correlate with memory performance within any age group. By contrast, in the older group only, task-negative effects predicted later memory performance. The findings demonstrate that negative subsequent memory and task-negative effects depend on dissociable neural mechanisms and likely reflect distinct cognitive processes. The relationship between task-negative effects and memory performance in the older group might reflect the sensitivity of these effects to variations in amount of age-related neuropathology. This article is part of a Special Issue entitled SI: Memory. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Sensory, Cognitive, and Sensorimotor Learning Effects in Recognition Memory for Music.

    PubMed

    Mathias, Brian; Tillmann, Barbara; Palmer, Caroline

    2016-08-01

    Recent research suggests that perception and action are strongly interrelated and that motor experience may aid memory recognition. We investigated the role of motor experience in auditory memory recognition processes by musicians using behavioral, ERP, and neural source current density measures. Skilled pianists learned one set of novel melodies by producing them and another set by perception only. Pianists then completed an auditory memory recognition test during which the previously learned melodies were presented with or without an out-of-key pitch alteration while the EEG was recorded. Pianists indicated whether each melody was altered from or identical to one of the original melodies. Altered pitches elicited a larger N2 ERP component than original pitches, and pitches within previously produced melodies elicited a larger N2 than pitches in previously perceived melodies. Cortical motor planning regions were more strongly activated within the time frame of the N2 following altered pitches in previously produced melodies compared with previously perceived melodies, and larger N2 amplitudes were associated with greater detection accuracy following production learning than perception learning. Early sensory (N1) and later cognitive (P3a) components elicited by pitch alterations correlated with predictions of sensory echoic and schematic tonality models, respectively, but only for the perception learning condition, suggesting that production experience alters the extent to which performers rely on sensory and tonal recognition cues. These findings provide evidence for distinct time courses of sensory, schematic, and motoric influences within the same recognition task and suggest that learned auditory-motor associations influence responses to out-of-key pitches.

  14. Change blindness and visual memory: visual representations get rich and act poor.

    PubMed

    Varakin, D Alexander; Levin, Daniel T

    2006-02-01

    Change blindness is often taken as evidence that visual representations are impoverished, while successful recognition of specific objects is taken as evidence that they are richly detailed. In the current experiments, participants performed cover tasks that required each object in a display to be attended. Change detection trials were unexpectedly introduced and surprise recognition tests were given for nonchanging displays. For both change detection and recognition, participants had to distinguish objects from the same basic-level category, making it likely that specific visual information had to be used for successful performance. Although recognition was above chance, incidental change detection usually remained at floor. These results help reconcile demonstrations of poor change detection with demonstrations of good memory because they suggest that the capability to store visual information in memory is not reflected by the visual system's tendency to utilize these representations for purposes of detecting unexpected changes.

  15. Integration trumps selection in object recognition.

    PubMed

    Saarela, Toni P; Landy, Michael S

    2015-03-30

    Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several "cues" (color, luminance, texture, etc.), and humans can integrate sensory cues to improve detection and recognition [1-3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue invariance by responding to a given shape independent of the visual cue defining it [5-8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10, 11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11, 12], imaging [13-16], and single-cell and neural population recordings [17, 18]. Besides single features, attention can select whole objects [19-21]. Objects are among the suggested "units" of attention because attention to a single feature of an object causes the selection of all of its features [19-21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. Copyright © 2015 Elsevier Ltd. All rights reserved.
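    "Near-optimal" integration across independent cues is conventionally modeled as reliability-weighted (inverse-variance) averaging; the sketch below shows that textbook rule for three cue estimates, only to unpack the claim in the abstract. The numbers are invented.

      # Textbook reliability-weighted (inverse-variance) cue combination, included
      # only to illustrate what "near-optimal integration" means; numbers are invented.
      import numpy as np

      def combine_cues(estimates, sigmas):
          """Combine independent cue estimates weighted by their reliabilities (1/sigma^2)."""
          estimates, sigmas = np.asarray(estimates, float), np.asarray(sigmas, float)
          weights = 1.0 / sigmas**2
          weights /= weights.sum()
          combined = weights @ estimates
          combined_sigma = np.sqrt(1.0 / np.sum(1.0 / sigmas**2))
          return combined, combined_sigma

      # Hypothetical color, texture, and luminance estimates of the same shape property:
      value, sigma = combine_cues([1.8, 2.2, 2.0], [0.4, 0.6, 0.5])
      print(round(value, 3), round(sigma, 3))  # the combined estimate is more reliable than any single cue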

  16. To Fear Is to Gain? The Role of Fear Recognition in Risky Decision Making in TBI Patients and Healthy Controls

    PubMed Central

    Visser-Keizer, Annemarie C.; Westerhof-Evers, Herma J.; Gerritsen, Marleen J. J.; van der Naalt, Joukje; Spikman, Jacoba M.

    2016-01-01

    Fear is an important emotional reaction that guides decision making in situations of ambiguity or uncertainty. Both recognition of facial expressions of fear and decision making ability can be impaired after traumatic brain injury (TBI), in particular when the frontal lobe is damaged. So far, it has not been investigated how recognition of fear influences risk behavior in healthy subjects and TBI patients. The ability to recognize fear is thought to be related to the ability to experience fear and to use it as a warning signal to guide decision making. We hypothesized that a better ability to recognize fear would be related to a better regulation of risk behavior, with healthy controls outperforming TBI patients. To investigate this, 59 healthy subjects and 49 TBI patients were assessed with a test for emotion recognition (Facial Expression of Emotion: Stimuli and Tests) and a gambling task (Iowa Gambling Task (IGT)). The results showed that, regardless of post traumatic amnesia duration or the presence of frontal lesions, patients were more impaired than healthy controls on both fear recognition and decision making. In both groups, a significant relationship was found between better fear recognition, the development of an advantageous strategy across the IGT and less risk behavior in the last blocks of the IGT. Educational level moderated this relationship in the final block of the IGT. This study has important clinical implications, indicating that impaired decision making and risk behavior after TBI can be preceded by deficits in the processing of fear. PMID:27870900

  17. To Fear Is to Gain? The Role of Fear Recognition in Risky Decision Making in TBI Patients and Healthy Controls.

    PubMed

    Visser-Keizer, Annemarie C; Westerhof-Evers, Herma J; Gerritsen, Marleen J J; van der Naalt, Joukje; Spikman, Jacoba M

    2016-01-01

    Fear is an important emotional reaction that guides decision making in situations of ambiguity or uncertainty. Both recognition of facial expressions of fear and decision making ability can be impaired after traumatic brain injury (TBI), in particular when the frontal lobe is damaged. So far, it has not been investigated how recognition of fear influences risk behavior in healthy subjects and TBI patients. The ability to recognize fear is thought to be related to the ability to experience fear and to use it as a warning signal to guide decision making. We hypothesized that a better ability to recognize fear would be related to a better regulation of risk behavior, with healthy controls outperforming TBI patients. To investigate this, 59 healthy subjects and 49 TBI patients were assessed with a test for emotion recognition (Facial Expression of Emotion: Stimuli and Tests) and a gambling task (Iowa Gambling Task (IGT)). The results showed that, regardless of post traumatic amnesia duration or the presence of frontal lesions, patients were more impaired than healthy controls on both fear recognition and decision making. In both groups, a significant relationship was found between better fear recognition, the development of an advantageous strategy across the IGT and less risk behavior in the last blocks of the IGT. Educational level moderated this relationship in the final block of the IGT. This study has important clinical implications, indicating that impaired decision making and risk behavior after TBI can be preceded by deficits in the processing of fear.

  18. Integration trumps selection in object recognition

    PubMed Central

    Saarela, Toni P.; Landy, Michael S.

    2015-01-01

    Summary Finding and recognizing objects is a fundamental task of vision. Objects can be defined by several “cues” (color, luminance, texture etc.), and humans can integrate sensory cues to improve detection and recognition [1–3]. Cortical mechanisms fuse information from multiple cues [4], and shape-selective neural mechanisms can display cue-invariance by responding to a given shape independent of the visual cue defining it [5–8]. Selective attention, in contrast, improves recognition by isolating a subset of the visual information [9]. Humans can select single features (red or vertical) within a perceptual dimension (color or orientation), giving faster and more accurate responses to items having the attended feature [10,11]. Attention elevates neural responses and sharpens neural tuning to the attended feature, as shown by studies in psychophysics and modeling [11,12], imaging [13–16], and single-cell and neural population recordings [17,18]. Besides single features, attention can select whole objects [19–21]. Objects are among the suggested “units” of attention because attention to a single feature of an object causes the selection of all of its features [19–21]. Here, we pit integration against attentional selection in object recognition. We find, first, that humans can integrate information near-optimally from several perceptual dimensions (color, texture, luminance) to improve recognition. They cannot, however, isolate a single dimension even when the other dimensions provide task-irrelevant, potentially conflicting information. For object recognition, it appears that there is mandatory integration of information from multiple dimensions of visual experience. The advantage afforded by this integration, however, comes at the expense of attentional selection. PMID:25802154

  19. Atoms of recognition in human and computer vision.

    PubMed

    Ullman, Shimon; Assif, Liav; Fetaya, Ethan; Harari, Daniel

    2016-03-08

    Discovering the visual features and representations used by the brain to recognize objects is a central problem in the study of vision. Recently, neural network models of visual object recognition, including biological and deep network models, have shown remarkable progress and have begun to rival human performance in some challenging tasks. These models are trained on image examples and learn to extract features and representations and to use them for categorization. It remains unclear, however, whether the representations and learning processes discovered by current models are similar to those used by the human visual system. Here we show, by introducing and using minimal recognizable images, that the human visual system uses features and processes that are not used by current models and that are critical for recognition. We found by psychophysical studies that at the level of minimal recognizable images a minute change in the image can have a drastic effect on recognition, thus identifying features that are critical for the task. Simulations then showed that current models cannot explain this sensitivity to precise feature configurations and, more generally, do not learn to recognize minimal images at a human level. The role of the features shown here is revealed uniquely at the minimal level, where the contribution of each feature is essential. A full understanding of the learning and use of such features will extend our understanding of visual recognition and its cortical mechanisms and will enhance the capacity of computational models to learn from visual experience and to deal with recognition and detailed image interpretation.

  20. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report

    PubMed Central

    Poth, Christian H.; Schneider, Werner X.

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM. PMID:27713722

  1. Episodic Short-Term Recognition Requires Encoding into Visual Working Memory: Evidence from Probe Recognition after Letter Report.

    PubMed

    Poth, Christian H; Schneider, Werner X

    2016-01-01

    Human vision is organized in discrete processing episodes (e.g., eye fixations or task-steps). Object information must be transmitted across episodes to enable episodic short-term recognition: recognizing whether a current object has been seen in a previous episode. We ask whether episodic short-term recognition presupposes that objects have been encoded into capacity-limited visual working memory (VWM), which retains visual information for report. Alternatively, it could rely on the activation of visual features or categories that occurs before encoding into VWM. We assessed the dependence of episodic short-term recognition on VWM by a new paradigm combining letter report and probe recognition. Participants viewed displays of 10 letters and reported as many as possible after a retention interval (whole report). Next, participants viewed a probe letter and indicated whether it had been one of the 10 letters (probe recognition). In Experiment 1, probe recognition was more accurate for letters that had been encoded into VWM (reported letters) compared with non-encoded letters (non-reported letters). Interestingly, those letters that participants reported in their whole report had been near to one another within the letter displays. This suggests that the encoding into VWM proceeded in a spatially clustered manner. In Experiment 2, participants reported only one of 10 letters (partial report) and probes either referred to this letter, to letters that had been near to it, or far from it. Probe recognition was more accurate for near than for far letters, although none of these letters had to be reported. These findings indicate that episodic short-term recognition is constrained to a small number of simultaneously presented objects that have been encoded into VWM.

  2. Different underlying mechanisms for face emotion and gender processing during feature-selective attention: Evidence from event-related potential studies.

    PubMed

    Wang, Hailing; Ip, Chengteng; Fu, Shimin; Sun, Pei

    2017-05-01

    Face recognition theories suggest that our brains process invariant (e.g., gender) and changeable (e.g., emotion) facial dimensions separately. To investigate whether these two dimensions are processed in different time courses, we analyzed the selection negativity (SN, an event-related potential component reflecting attentional modulation) elicited by face gender and emotion during a feature-selective attention task. Participants were instructed to attend to a combination of face emotion and gender attributes in Experiment 1 (bi-dimensional task) and to either face emotion or gender in Experiment 2 (uni-dimensional task). The results revealed that face emotion did not elicit a substantial SN, whereas face gender consistently generated a substantial SN in both experiments. These results suggest that face gender is more sensitive to feature-selective attention and that face emotion is encoded relatively automatically, as reflected in the SN, implying the existence of different underlying processing mechanisms for invariant and changeable facial dimensions. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. Episodic Memory and Future Thinking During Early Childhood: Linking the Past and Future

    PubMed Central

    Cuevas, Kimberly; Rajan, Vinaya; Morasch, Katherine C.; Bell, Martha Ann

    2015-01-01

    Despite extensive examination of episodic memory and future thinking development, little is known about the concurrent emergence of these capacities during early childhood. In Experiment 1, 3-year-olds participated in an episodic memory hiding task [“what, when, where” (WWW) components] with an episodic future thinking component. In Experiment 2, a group of 4-year-olds (including children from Experiment 1) participated in the same task (different objects and locations), providing the first longitudinal investigation of episodic memory and future thinking. Although children exhibited age-related improvements in recall, recognition, and binding of the WWW episodic memory components, there were no age-related changes in episodic future thinking. At both ages, WWW episodic memory performance was higher than future thinking performance, and episodic future thinking and WWW memory components were unrelated. These findings suggest that the WWW components of episodic memory are potentially less fragile than the future components when assessed in a cognitively demanding task. PMID:25864990

  4. Recognizing the bank robber and spotting the difference: emotional state and global vs. local attentional set.

    PubMed

    Pacheco-Unguetti, Antonia Pilar; Acosta, Alberto; Lupiáñez, Juan

    2014-01-01

    In two experiments (161 participants in total), we investigated how current mood influences processing styles (global vs. local). Participants watched a video of a bank robbery before receiving a positive, negative or neutral mood induction, and they performed two tasks: a face-recognition task about the bank robber as a global processing measure, and a spot-the-difference task using neutral pictures (Experiment 1) or emotional scenes (Experiment 2) as a local processing measure. Results showed that positive mood induction favoured a global processing style, enhancing participants' ability to correctly identify a face even though they watched the video before the mood induction. This shows that, besides influencing encoding processes, mood state can also be related to retrieval processes. On the contrary, negative mood induction enhanced a local processing style, making the detection of differences between nearly identical pictures easier and faster, independently of their valence. This dissociation supports the hypothesis that current mood modulates processing through activation of different cognitive styles.

  5. Episodic memory and future thinking during early childhood: Linking the past and future.

    PubMed

    Cuevas, Kimberly; Rajan, Vinaya; Morasch, Katherine C; Bell, Martha Ann

    2015-07-01

    Despite extensive examination of episodic memory and future thinking development, little is known about the concurrent emergence of these capacities during early childhood. In Experiment 1, 3-year-olds participated in an episodic memory hiding task ("what, when, where" [WWW] components) with an episodic future thinking component. In Experiment 2, a group of 4-year-olds (including children from Experiment 1) participated in the same task (different objects and locations), providing the first longitudinal investigation of episodic memory and future thinking. Although children exhibited age-related improvements in recall, recognition, and binding of the WWW episodic memory components, there were no age-related changes in episodic future thinking. At both ages, WWW episodic memory performance was higher than future thinking performance, and episodic future thinking and WWW memory components were unrelated. These findings suggest that the WWW components of episodic memory are potentially less fragile than the future components when assessed in a cognitively demanding task. © 2015 Wiley Periodicals, Inc.

  6. The development of cross-cultural recognition of vocal emotion during childhood and adolescence.

    PubMed

    Chronaki, Georgia; Wigelsworth, Michael; Pell, Marc D; Kotz, Sonja A

    2018-06-14

    Humans have an innate set of emotions recognised universally. However, emotion recognition also depends on socio-cultural rules. Although adults recognise vocal emotions universally, they identify emotions more accurately in their native language. We examined developmental trajectories of universal vocal emotion recognition in children. Eighty native English speakers completed a vocal emotion recognition task in their native language (English) and in foreign languages (Spanish, Chinese, and Arabic) expressing anger, happiness, sadness, fear, and neutrality. Emotion recognition was compared across 8- to 10-year-olds, 11- to 13-year-olds, and adults. Measures of behavioural and emotional problems were also taken. Results showed that although emotion recognition was above chance for all languages, native English-speaking children were more accurate in recognising vocal emotions in their native language. There was a larger improvement in recognising vocal emotion from the native language during adolescence. Vocal anger recognition did not improve with age for the non-native languages. This is the first study to demonstrate universality of vocal emotion recognition in children whilst supporting an "in-group advantage" for more accurate recognition in the native language. Findings highlight the role of experience in emotion recognition, have implications for child development in modern multicultural societies, and address important theoretical questions about the nature of emotions.

  7. Does Talker-Specific Information Influence Lexical Competition? Evidence from Phonological Priming

    ERIC Educational Resources Information Center

    Dufour, Sophie; Nguyen, Noël

    2017-01-01

    In this study, we examined whether the lexical competition process embraced by most models of spoken word recognition is sensitive to talker-specific information. We used a lexical decision task and a long lag priming experiment in which primes and targets sharing all phonemes except the last one (e.g., /bagaR/"fight" vs.…

  8. Effects of Hearing and Aging on Sentence-Level Time-Gated Word Recognition

    ERIC Educational Resources Information Center

    Molis, Michelle R.; Kampel, Sean D.; McMillan, Garnett P.; Gallun, Frederick J.; Dann, Serena M.; Konrad-Martin, Dawn

    2015-01-01

    Purpose: Aging is known to influence temporal processing, but its relationship to speech perception has not been clearly defined. To examine listeners' use of contextual and phonetic information, the Revised Speech Perception in Noise test (R-SPIN) was used to develop a time-gated word (TGW) task. Method: In Experiment 1, R-SPIN sentence lists…

  9. Neurophysiological Evidence for Underspecified Lexical Representations: Asymmetries with Word Initial Variations

    ERIC Educational Resources Information Center

    Friedrich, Claudia K.; Lahiri, Aditi; Eulitz, Carsten

    2008-01-01

    How does the mental lexicon cope with phonetic variants in recognition of spoken words? Using a lexical decision task with and without fragment priming, the authors compared the processing of German words and pseudowords that differed only in the place of articulation of the initial consonant (place). Across both experiments, event-related brain…

  10. An empirical investigation of sparse distributed memory using discrete speech recognition

    NASA Technical Reports Server (NTRS)

    Danforth, Douglas G.

    1990-01-01

    Presented here is a step-by-step analysis of how the basic Sparse Distributed Memory (SDM) model can be modified to enhance its generalization capabilities for classification tasks. Data are taken from speech generated by a single talker. Experiments are used to investigate the theory of associative memories and the question of generalization from specific instances.
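
    The record above does not give implementation details, so the following is only a minimal sketch of a generic binary Sparse Distributed Memory read/write cycle in Python (NumPy). The number of hard locations, the address/word widths, and the activation radius are illustrative assumptions, not values from the study, and the classification-oriented modifications the record describes are not reproduced here.

      import numpy as np

      class SparseDistributedMemory:
          """Minimal binary SDM: hard locations activate within a Hamming radius."""

          def __init__(self, n_locations=1000, address_bits=256, word_bits=256,
                       radius=111, seed=0):
              rng = np.random.default_rng(seed)
              # Fixed random hard-location addresses (illustrative sizes)
              self.addresses = rng.integers(0, 2, size=(n_locations, address_bits))
              self.counters = np.zeros((n_locations, word_bits), dtype=np.int32)
              self.radius = radius

          def _active(self, address):
              # Hamming distance from the probe address to every hard location
              dist = np.count_nonzero(self.addresses != address, axis=1)
              return dist <= self.radius

          def write(self, address, word):
              # Increment/decrement counters of all activated locations
              self.counters[self._active(address)] += np.where(word == 1, 1, -1)

          def read(self, address):
              # Sum counters over activated locations and threshold at zero
              sums = self.counters[self._active(address)].sum(axis=0)
              return (sums > 0).astype(int)

      # Autoassociative usage example with a random 256-bit pattern
      sdm = SparseDistributedMemory()
      pattern = np.random.default_rng(1).integers(0, 2, 256)
      sdm.write(pattern, pattern)
      recalled = sdm.read(pattern)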

  11. Eye-Tracking Study on Facial Emotion Recognition Tasks in Individuals with High-Functioning Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Tsang, Vicky

    2018-01-01

    The eye-tracking experiment was carried out to assess fixation duration and scan paths that individuals with and without high-functioning autism spectrum disorders employed when identifying simple and complex emotions. Participants viewed human photos of facial expressions and decided on the identification of emotion, the negative-positive emotion…

  12. Using Eye Movement Analysis to Study Auditory Effects on Visual Memory Recall

    PubMed Central

    Marandi, Ramtin Zargari; Sabzpoushan, Seyed Hojjat

    2014-01-01

    Recent studies in affective computing are focused on sensing human cognitive context using biosignals. In this study, electrooculography (EOG) was utilized to investigate memory recall accessibility via eye movement patterns. Twelve subjects participated in our experiment, in which pictures from four categories were presented. Each category contained nine pictures, of which three were presented twice and the rest were presented once only. Each picture presentation took five seconds with an adjoining three-second interval. Similarly, this task was performed with new pictures together with related sounds. The task was free viewing, and participants were not informed about the task's purpose. Using pattern recognition techniques, participants' EOG signals in response to repeated and non-repeated pictures were classified for the "with sound" and "without sound" stages. The method was validated with eight different participants. The recognition rate in the "with sound" stage was significantly reduced compared with the "without sound" stage. The result demonstrated that the familiarity of visual-auditory stimuli can be detected from EOG signals and that the auditory input potentially improves the visual recall process. PMID:25436085
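
    The abstract above only says that "pattern recognition techniques" were used to classify EOG responses to repeated versus non-repeated pictures. A hedged, generic sketch of such a step is shown below (scikit-learn); the per-trial summary features and the linear SVM are assumptions for illustration, not the authors' actual method.

      import numpy as np
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler
      from sklearn.svm import SVC

      def trial_features(eog_trial):
          """Collapse one EOG trial (channels x samples) into simple summary statistics."""
          return np.concatenate([eog_trial.mean(axis=1),
                                 eog_trial.std(axis=1),
                                 np.ptp(eog_trial, axis=1)])

      def repeated_vs_new_accuracy(trials, labels):
          """trials: list of (channels x samples) arrays; labels: 1 = repeated picture."""
          X = np.vstack([trial_features(t) for t in trials])
          clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
          return cross_val_score(clf, X, np.asarray(labels), cv=5).mean()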

  13. EEG based topography analysis in string recognition task

    NASA Astrophysics Data System (ADS)

    Ma, Xiaofei; Huang, Xiaolin; Shen, Yuxiaotong; Qin, Zike; Ge, Yun; Chen, Ying; Ning, Xinbao

    2017-03-01

    Vision perception and recognition is a complex process, during which different parts of the brain are involved depending on the specific modality of the vision target, e.g. face, character, or word. In this study, brain activities in a string recognition task, compared with an idle control state, are analyzed through topographies based on multiple measures, i.e. sample entropy, symbolic sample entropy and normalized rhythm power, extracted from simultaneously collected scalp EEG. Our analyses show that, for most subjects, both symbolic sample entropy and normalized gamma power in the string recognition task are significantly higher than those in the idle state, especially at locations P4, O2, T6 and C4. This implies that these regions are highly involved in the string recognition task. Since symbolic sample entropy measures complexity from the perspective of new information generation, and normalized rhythm power reveals the power distribution in the frequency domain, complementary information about the underlying dynamics can be provided through the two types of indices.
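
    The two families of indices named above (sample entropy and normalized rhythm power) can be computed roughly as in the sketch below. The embedding dimension m = 2, the tolerance r = 0.2 x SD, the 30-45 Hz gamma band, and the 250 Hz sampling rate are illustrative assumptions; symbolic sample entropy, which requires an additional symbolization step, is omitted.

      import numpy as np
      from scipy.signal import welch

      def sample_entropy(x, m=2, r_factor=0.2):
          """Sample entropy of a 1-D signal with tolerance r = r_factor * std(x)."""
          x = np.asarray(x, dtype=float)
          r = r_factor * x.std()

          def match_count(length):
              templates = np.array([x[i:i + length] for i in range(len(x) - length)])
              # Chebyshev distance between every pair of templates
              d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
              return (d <= r).sum() - len(templates)   # exclude self-matches

          b, a = match_count(m), match_count(m + 1)
          return -np.log(a / b) if a > 0 and b > 0 else np.inf

      def normalized_band_power(x, fs=250.0, band=(30.0, 45.0)):
          """Power in one rhythm band divided by total power (Welch periodogram)."""
          freqs, psd = welch(x, fs=fs, nperseg=min(len(x), 1024))
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return psd[in_band].sum() / psd.sum()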

  14. Development of the Ability to Use Facial, Situational, and Vocal Cues to Infer Others' Affective States.

    ERIC Educational Resources Information Center

    Farber, Ellen A.; Moely, Barbara E.

    Results of two studies investigating children's abilities to use different kinds of cues to infer another's affective state are reported in this paper. In the first study, 48 children (3, 4, and 6 to 7 years of age) were given three different kinds of tasks (interpersonal task, facial recognition task, and vocal recognition task). A cross-age…

  15. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device.

    PubMed

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-09-01

    (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. 30 subjects with AMD (age range 66-90 years; visual acuity 0.4-1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = -0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = -0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance.

  16. Face recognition in age related macular degeneration: perceived disability, measured disability, and performance with a bioptic device

    PubMed Central

    Tejeria, L; Harper, R A; Artes, P H; Dickinson, C M

    2002-01-01

    Aims: (1) To explore the relation between performance on tasks of familiar face recognition (FFR) and face expression difference discrimination (FED) with both perceived disability in face recognition and clinical measures of visual function in subjects with age related macular degeneration (AMD). (2) To quantify the gain in performance for face recognition tasks when subjects use a bioptic telescopic low vision device. Methods: 30 subjects with AMD (age range 66–90 years; visual acuity 0.4–1.4 logMAR) were recruited for the study. Perceived (self rated) disability in face recognition was assessed by an eight item questionnaire covering a range of issues relating to face recognition. Visual functions measured were distance visual acuity (ETDRS logMAR charts), continuous text reading acuity (MNRead charts), contrast sensitivity (Pelli-Robson chart), and colour vision (large panel D-15). In the FFR task, images of famous people had to be identified. FED was assessed by a forced choice test where subjects had to decide which one of four images showed a different facial expression. These tasks were repeated with subjects using a bioptic device. Results: Overall perceived disability in face recognition did not correlate with performance on either task, although a specific item on difficulty recognising familiar faces did correlate with FFR (r = 0.49, p<0.05). FFR performance was most closely related to distance acuity (r = −0.69, p<0.001), while FED performance was most closely related to continuous text reading acuity (r = −0.79, p<0.001). In multiple regression, neither contrast sensitivity nor colour vision significantly increased the explained variance. When using a bioptic telescope, FFR performance improved in 86% of subjects (median gain = 49%; p<0.001), while FED performance increased in 79% of subjects (median gain = 50%; p<0.01). Conclusion: Distance and reading visual acuity are closely associated with measured task performance in FFR and FED. A bioptic low vision device can offer a significant improvement in performance for face recognition tasks, and may be useful in reducing the handicap associated with this disability. There is, however, little evidence for a correlation between self rated difficulty in face recognition and measured performance for either task. Further work is needed to explore the complex relation between the perception of disability and measured performance. PMID:12185131

  17. Directed forgetting: differential effects on typical and distinctive faces.

    PubMed

    Metzger, Mitchell M

    2011-01-01

    Directed forgetting (DF) occurs when stimuli presented during the study phase are followed by "forget" and "remember" cues. On a subsequent memory test, poor memory is observed for stimuli followed by the forget cues, compared to stimuli followed by the remember cues. Although DF is most commonly observed with verbal tasks, the present study extended intentional forgetting research for nonverbal stimuli and examined whether faces were susceptible to DF. Results confirmed that the presentation of a forget cue significantly reduced recognition for faces, as compared to faces followed by a remember cue. Additionally, a well-established finding in face recognition is that distinctive faces are better remembered than typical faces, and Experiment 2 assessed whether face appearance influenced the degree of DF. Results indicate that the DF effect observed in Experiment 1 was replicated in Experiment 2 and that the effect was more pronounced for those faces that were typical in appearance.

  18. Humor in print health advertisements: enhanced attention, privileged recognition, and persuasiveness of preventive messages.

    PubMed

    Blanc, Nathalie; Brigaud, Emmanuelle

    2014-01-01

    This study tested the effect of humor in one particular type of print advertisement: preventive health ads for three topics (alcohol, tobacco, obesity). Previous research using commercial ads demonstrated that individuals' attention is spontaneously attracted by humor, leading to a memory advantage for humorous information over nonhumorous information. Two experiments investigated whether the positive effect of humor can occur with preventive health ads. In Experiment 1, participants observed humorous and nonhumorous health ads while their viewing times were recorded. In Experiment 2, to compare humorous and nonhumorous ads, memory for the health messages was assessed through a recognition task, and a convincingness rating was collected. The results confirmed that, compared to nonhumorous health ads, those using humor received prolonged attention, were judged more convincing, and their messages were better recognized. Overall, these findings suggest that humor can be of use in preventive health communication.

  19. Stages of functional processing and the bihemispheric recognition of Japanese Kana script.

    PubMed

    Yoshizaki, K

    2000-04-01

    Two experiments were carried out to examine the effects of functional steps on the benefits of interhemispheric integration. The purpose of Experiment 1 was to investigate the validity of the Banich (1995a) model, in which the benefits of interhemispheric processing increase as the task involves more functional steps. Sixteen right-handed subjects were given two types of Hiragana-Katakana script matching tasks. One was the Name Identity (NI) task, and the other was the vowel matching (VM) task, which involved more functional steps than the NI task. The VM task required subjects to decide whether or not a pair of Katakana-Hiragana scripts had a common vowel. In both tasks, a pair of Kana scripts (Katakana-Hiragana scripts) was tachistoscopically presented in the unilateral visual fields or in the bilateral visual fields, where each letter was presented in a different visual field. A bilateral visual fields advantage (BFA) was found in both tasks, and its size did not differ between the tasks, suggesting that these findings did not support the Banich model. The purpose of Experiment 2 was to examine the effects of an imbalanced processing load between the hemispheres on the benefits of interhemispheric integration. In order to manipulate the balance of processing load across the hemispheres, the revised vowel matching (r-VM) task was developed by amending the VM task. The r-VM task was the same as the VM task in Experiment 1, except that a script with only a vowel sound was presented as one member of the Kana pair. Twenty-four right-handed subjects were given the r-VM and NI tasks. The results showed that although a BFA appeared in the NI task, it did not in the r-VM task. These results suggest that the balance of processing load between the hemispheres influences bilateral hemispheric processing.

  20. Task-Dependent Masked Priming Effects in Visual Word Recognition

    PubMed Central

    Kinoshita, Sachiko; Norris, Dennis

    2012-01-01

    A method used widely to study the first 250 ms of visual word recognition is masked priming: These studies have yielded a rich set of data concerning the processes involved in recognizing letters and words. In these studies, there is an implicit assumption that the early processes in word recognition tapped by masked priming are automatic, and masked priming effects should therefore be invariant across tasks. Contrary to this assumption, masked priming effects are modulated by the task goal: For example, only word targets show priming in the lexical decision task, but both words and non-words do in the same-different task; semantic priming effects are generally weak in the lexical decision task but are robust in the semantic categorization task. We explain how such task dependence arises within the Bayesian Reader account of masked priming (Norris and Kinoshita, 2008), and how the task dissociations can be used to understand the early processes in lexical access. PMID:22675316

  1. Selective inattention to anxiety-linked stimuli.

    PubMed

    Blum, G S; Barbour, J S

    1979-06-01

    The term selective inattention as used here subsumes those phenomena whose primary function is the active blocking or attenuation of partially processed contents en route to conscious expression. Examples are anxiety-motivated forgetting or perceptual distortion and hypnotically induced negative hallucinations. Studies in the field of selective attention have typically been designed to explain what takes place in a task in which the subject is first instructed to attend to a particular stimulus and then to consciously execute that instruction as well as he can. The rejection of content in process is examined only secondarily, as a consequence of the acceptance of relevant information. In the present experiments and theorizing, the emphasis instead is on inhibitory operations that take place automatically, without conscious intent, in response to a potential anxiety reaction. Experiment 1 explored the interaction of anxiety-linked inattention with the strength of a target stimulus. Three female subjects were programmed under hypnosis to respond posthypnotically in the On condition with prescribed degrees of anxiety when certain Blacky pictures popped into mind later, at the end of experimental trials; in the Off condition, all pictures were to become neutral. With the three female subjects still under hypnosis, each of the loaded pictures was then paired with a four-letter word relevant to the individual's own version of what was happening in the picture. The waking recognition task, carried out with amnesia for the prior hypnotic programming, consisted of tachistoscopic exposure of loaded words and physically similar filler words at four durations within a baseline range of recognition accuracy from 50%-75% correct. The data yielded a curvilinear relationship in which the recognition of only the loaded words was significantly lower in the On condition at the 60%-70% range of recognition accuracy, but not at shorter or longer stimulus durations. Experiment 2, for which the prior hypnotic programming of the same three subjects was similar to Experiment 1, used an anagram approach to comparable four-letter words, except that pleasure-loaded words were introduced as a control along with filler words. Four durations of tachistoscopic exposure of the anagrams were used with each individual, and the major dependent variable was response latency measured in milliseconds. An independent measure of perceptual discriminability of the scrambled stimulus letters was obtained to isolate perceptual from cognitive aspects of the task. The results indicated that both low perceivability and high solvability increase the likelihood of response delays specifically in the presence of anxiety-linked stimuli. Experiment 3 was a nonhypnotic replication of Experiment 2, using 12 male and 13 female subjects. The potential affective loading of key anxiety and pleasure words was accomplished by structured scenarios for the Blacky pictures in which subjects were asked to place themselves as vividly as possible...

  2. Outlining face processing skills of portrait artists: Perceptual experience with faces predicts performance.

    PubMed

    Devue, Christel; Barsics, Catherine

    2016-10-01

    Most humans seem to demonstrate astonishingly high levels of skill in face processing if one considers the sophisticated level of fine-tuned discrimination that face recognition requires. However, numerous studies now indicate that the ability to process faces is not as fundamental as once thought and that performance can range from despairingly poor to extraordinarily high across people. Here we studied people who are super specialists of faces, namely portrait artists, to examine how their specific visual experience with faces relates to a range of face processing skills (perceptual discrimination, short- and longer term recognition). Artists show better perceptual discrimination and, to some extent, recognition of newly learned faces than controls. They are also more accurate on other perceptual tasks (i.e., involving non-face stimuli or mental rotation). By contrast, artists do not display an advantage compared to controls on longer term face recognition (i.e., famous faces) nor on person recognition from other sensorial modalities (i.e., voices). Finally, the face inversion effect exists in artists and controls and is not modulated by artistic practice. Advantages in face processing for artists thus seem to closely mirror perceptual and visual short term memory skills involved in portraiture. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Influence of auditory attention on sentence recognition captured by the neural phase.

    PubMed

    Müller, Jana Annina; Kollmeier, Birger; Debener, Stefan; Brand, Thomas

    2018-03-07

    The aim of this study was to investigate whether attentional influences on speech recognition are reflected in the neural phase entrained by an external modulator. Sentences were presented in 7 Hz sinusoidally modulated noise while the neural response to that modulation frequency was monitored by electroencephalogram (EEG) recordings in 21 participants. We implemented a selective attention paradigm including three different attention conditions while keeping physical stimulus parameters constant. The participants' task was either to repeat the sentence as accurately as possible (speech recognition task), to count the number of decrements implemented in modulated noise (decrement detection task), or to do both (dual task), while the EEG was recorded. Behavioural analysis revealed reduced performance in the dual task condition for decrement detection, possibly reflecting limited cognitive resources. EEG analysis revealed no significant differences in power for the 7 Hz modulation frequency, but an attention-dependent phase difference between tasks. Further phase analysis revealed a significant difference 500 ms after sentence onset between trials with correct and incorrect responses for speech recognition, indicating that speech recognition performance and the neural phase are linked via selective attention mechanisms, at least shortly after sentence onset. However, the neural phase effects identified were small and await further investigation. © 2018 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  4. Action Identity in Style Simulation Systems: Do Players Consider Machine-Generated Music As of Their Own Style?

    PubMed

    Khatchatourov, Armen; Pachet, François; Rowe, Victoria

    2016-01-01

    The generation of musical material in a given style has been the subject of many studies with the increased sophistication of artificial intelligence models of musical style. In this paper we address a question of primary importance for artificial intelligence and music psychology: can such systems generate music that users indeed consider as corresponding to their own style? We address this question through an experiment involving both performance and recognition tasks with musically naïve school-age children. We asked 56 children to perform a free-form improvisation, from which two kinds of music excerpt were created. One was a mere recording of the original performances. The other was created by a software program designed to simulate the participants' style, based on their original performances. Two hours after the performance task, the children completed the recognition task in two conditions, one with the original excerpts and one with machine-generated music. Results indicate that the success rate is practically equivalent in the two conditions: children tended to make correct attributions of the excerpts to themselves or to others, whether the music was human-produced or machine-generated (mean accuracy = 0.75 and 0.71, respectively). We discuss this equivalence in accuracy for machine-generated and human-produced music in the light of the literature on memory effects and action identity, which addresses the recognition of one's own production.

  5. Conversion of short-term to long-term memory in the novel object recognition paradigm

    PubMed Central

    Moore, Shannon J.; Deshpande, Kaivalya; Stinnett, Gwen S.; Seasholtz, Audrey F.; Murphy, Geoffrey G.

    2013-01-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. PMID:23835143

  6. Conversion of short-term to long-term memory in the novel object recognition paradigm.

    PubMed

    Moore, Shannon J; Deshpande, Kaivalya; Stinnett, Gwen S; Seasholtz, Audrey F; Murphy, Geoffrey G

    2013-10-01

    It is well-known that stress can significantly impact learning; however, whether this effect facilitates or impairs the resultant memory depends on the characteristics of the stressor. Investigation of these dynamics can be confounded by the role of the stressor in motivating performance in a task. Positing a cohesive model of the effect of stress on learning and memory necessitates elucidating the consequences of stressful stimuli independently from task-specific functions. Therefore, the goal of this study was to examine the effect of manipulating a task-independent stressor (elevated light level) on short-term and long-term memory in the novel object recognition paradigm. Short-term memory was elicited in both low light and high light conditions, but long-term memory specifically required high light conditions during the acquisition phase (familiarization trial) and was independent of the light level during retrieval (test trial). Additionally, long-term memory appeared to be independent of stress-mediated glucocorticoid release, as both low and high light produced similar levels of plasma corticosterone, which further did not correlate with subsequent memory performance. Finally, both short-term and long-term memory showed no savings between repeated experiments suggesting that this novel object recognition paradigm may be useful for longitudinal studies, particularly when investigating treatments to stabilize or enhance weak memories in neurodegenerative diseases or during age-related cognitive decline. Copyright © 2013 Elsevier Inc. All rights reserved.

  7. Action Identity in Style Simulation Systems: Do Players Consider Machine-Generated Music As of Their Own Style?

    PubMed Central

    Khatchatourov, Armen; Pachet, François; Rowe, Victoria

    2016-01-01

    The generation of musical material in a given style has been the subject of many studies with the increased sophistication of artificial intelligence models of musical style. In this paper we address a question of primary importance for artificial intelligence and music psychology: can such systems generate music that users indeed consider as corresponding to their own style? We address this question through an experiment involving both performance and recognition tasks with musically naïve school-age children. We asked 56 children to perform a free-form improvisation, from which two kinds of music excerpt were created. One was a mere recording of the original performances. The other was created by a software program designed to simulate the participants' style, based on their original performances. Two hours after the performance task, the children completed the recognition task in two conditions, one with the original excerpts and one with machine-generated music. Results indicate that the success rate is practically equivalent in the two conditions: children tended to make correct attributions of the excerpts to themselves or to others, whether the music was human-produced or machine-generated (mean accuracy = 0.75 and 0.71, respectively). We discuss this equivalence in accuracy for machine-generated and human-produced music in the light of the literature on memory effects and action identity, which addresses the recognition of one's own production. PMID:27199788

  8. Visual recognition and inference using dynamic overcomplete sparse learning.

    PubMed

    Murray, Joseph F; Kreutz-Delgado, Kenneth

    2007-09-01

    We present a hierarchical architecture and learning algorithm for visual recognition and other visual inference tasks such as imagination, reconstruction of occluded images, and expectation-driven segmentation. Using properties of biological vision for guidance, we posit a stochastic generative world model and from it develop a simplified world model (SWM) based on a tractable variational approximation that is designed to enforce sparse coding. Recent developments in computational methods for learning overcomplete representations (Lewicki & Sejnowski, 2000; Teh, Welling, Osindero, & Hinton, 2003) suggest that overcompleteness can be useful for visual tasks, and we use an overcomplete dictionary learning algorithm (Kreutz-Delgado et al., 2003) as a preprocessing stage to produce accurate, sparse codings of images. Inference is performed by constructing a dynamic multilayer network with feedforward, feedback, and lateral connections, which is trained to approximate the SWM. Learning is done with a variant of the back-propagation-through-time algorithm, which encourages convergence to desired states within a fixed number of iterations. Vision tasks require large networks, and to make learning efficient, we take advantage of the sparsity of each layer to update only a small subset of elements in a large weight matrix at each iteration. Experiments on a set of rotated objects demonstrate various types of visual inference and show that increasing the degree of overcompleteness improves recognition performance in difficult scenes with occluded objects in clutter.
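
    The abstract mentions an overcomplete dictionary learning stage that produces sparse codings of images. As a rough, generic stand-in (not the Kreutz-Delgado et al. algorithm), scikit-learn's dictionary learning can be used as sketched below; the patch dimensionality, dictionary size, and sparsity level are assumptions.

      import numpy as np
      from sklearn.decomposition import DictionaryLearning

      def sparse_code_patches(patches, n_atoms=256, n_nonzero=10):
          """patches: (n_patches, patch_dim) array; n_atoms > patch_dim gives overcompleteness.

          Returns (sparse codes, dictionary atoms as rows)."""
          patches = patches - patches.mean(axis=1, keepdims=True)   # remove per-patch DC
          learner = DictionaryLearning(n_components=n_atoms,
                                       transform_algorithm="omp",
                                       transform_n_nonzero_coefs=n_nonzero,
                                       max_iter=20)
          codes = learner.fit_transform(patches)
          return codes, learner.components_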

  9. Acquired prosopagnosia without word recognition deficits.

    PubMed

    Susilo, Tirta; Wright, Victoria; Tree, Jeremy J; Duchaine, Bradley

    2015-01-01

    It has long been suggested that face recognition relies on specialized mechanisms that are not involved in visual recognition of other object categories, including those that require expert, fine-grained discrimination at the exemplar level such as written words. But according to the recently proposed many-to-many theory of object recognition (MTMT), visual recognition of faces and words is carried out by common mechanisms [Behrmann, M., & Plaut, D. C. (2013). Distributed circuits, not circumscribed centers, mediate visual recognition. Trends in Cognitive Sciences, 17, 210-219]. MTMT acknowledges that face and word recognition are lateralized, but posits that the mechanisms that predominantly carry out face recognition still contribute to word recognition and vice versa. MTMT makes a key prediction, namely that acquired prosopagnosics should exhibit some measure of word recognition deficits. We tested this prediction by assessing written word recognition in five acquired prosopagnosic patients. Four patients had lesions limited to the right hemisphere while one had bilateral lesions with more pronounced lesions in the right hemisphere. The patients completed a total of seven word recognition tasks: two lexical decision tasks and five reading aloud tasks totalling more than 1200 trials. The performances of the four older patients (3 female, age range 50-64 years) were compared to those of 12 older controls (8 female, age range 56-66 years), while the performances of the younger prosopagnosic (male, 31 years) were compared to those of 14 younger controls (9 female, age range 20-33 years). We analysed all results at the single-patient level using Crawford's t-test. Across seven tasks, four prosopagnosics performed as quickly and accurately as controls. Our results demonstrate that acquired prosopagnosia can exist without word recognition deficits. These findings are inconsistent with a key prediction of MTMT. They instead support the hypothesis that face recognition is carried out by specialized mechanisms that do not contribute to recognition of written words.

  10. The Low-Frequency Encoding Disadvantage: Word Frequency Affects Processing Demands

    ERIC Educational Resources Information Center

    Diana, Rachel A.; Reder, Lynne M.

    2006-01-01

    Low-frequency words produce more hits and fewer false alarms than high-frequency words in a recognition task. The low-frequency hit rate advantage has sometimes been attributed to processes that operate during the recognition test (e.g., L. M. Reder et al., 2000). When tasks other than recognition, such as recall, cued recall, or associative…

  11. Déjà vu in unilateral temporal-lobe epilepsy is associated with selective familiarity impairments on experimental tasks of recognition memory.

    PubMed

    Martin, Chris B; Mirsattari, Seyed M; Pruessner, Jens C; Pietrantonio, Sandra; Burneo, Jorge G; Hayman-Abello, Brent; Köhler, Stefan

    2012-11-01

    In déjà vu, a phenomenological impression of familiarity for the current visual environment is experienced with a sense that it should in fact not feel familiar. The fleeting nature of this phenomenon in daily life, and the difficulty in developing experimental paradigms to elicit it, has hindered progress in understanding déjà vu. Some neurological patients with temporal-lobe epilepsy (TLE) consistently experience déjà vu at the onset of their seizures. An investigation of such patients offers a unique opportunity to shed light on its possible underlying mechanisms. In the present study, we sought to determine whether unilateral TLE patients with déjà vu (TLE+) show a unique pattern of interictal memory deficits that selectively affect familiarity assessment. In Experiment 1, we employed a Remember-Know paradigm for categorized visual scenes and found evidence for impairments that were limited to familiarity-based responses. In Experiment 2, we administered an exclusion task for highly similar categorized visual scenes that placed both recognition processes in opposition. TLE+ patients again displayed recognition impairments, and these impairments spared their ability to engage recollective processes so as to counteract familiarity. The selective deficits we observed in TLE+ patients contrasted with the broader pattern of recognition-memory impairments that was present in a control group of unilateral patients without déjà vu (TLE-). MRI volumetry revealed that ipsilateral medial temporal structures were less broadly affected in TLE+ than in TLE- patients, with a trend for more focal volume reductions in the rhinal cortices of the TLE+ group. The current findings establish a first empirical link between déjà vu in TLE and processes of familiarity assessment, as defined and measured in current cognitive models. They also reveal a pattern of selectivity in recognition impairments that is rarely observed and, thus, of significant theoretical interest to the memory literature at large. Copyright © 2012 Elsevier Ltd. All rights reserved.

  12. Convolutional neural networks and face recognition task

    NASA Astrophysics Data System (ADS)

    Sochenkova, A.; Sochenkov, I.; Makovetskii, A.; Vokhmintsev, A.; Melnikov, A.

    2017-09-01

    Computer vision tasks have remained very important over the last couple of years. One of the most complicated problems in computer vision is face recognition, which can be used in security systems to provide safety and to identify a person among others. There is a variety of approaches to this task, but there is still no universal solution that gives adequate results in all cases. The current paper presents the following approach. First, we extract the area containing the face; then we apply a Canny edge detector. At the next stage, we use a convolutional neural network (CNN) to solve the face recognition and person identification task.
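
    The three stages named in the abstract (face-region extraction, Canny edge detection, then a CNN) might be laid out roughly as below using OpenCV and PyTorch. The Haar-cascade file path, 64x64 crop size, number of identities, and network shape are assumptions for illustration; the paper's actual architecture is not specified here.

      import cv2
      import torch.nn as nn

      def face_edge_map(image_bgr, cascade_path="haarcascade_frontalface_default.xml"):
          """Crop the first detected face and return its 64x64 Canny edge map (or None)."""
          gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
          faces = cv2.CascadeClassifier(cascade_path).detectMultiScale(gray, 1.3, 5)
          if len(faces) == 0:
              return None
          x, y, w, h = faces[0]
          crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
          return cv2.Canny(crop, 100, 200)

      class EdgeFaceCNN(nn.Module):
          """Small CNN classifying 64x64 edge maps into n_identities persons."""

          def __init__(self, n_identities=10):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
              self.classifier = nn.Linear(32 * 16 * 16, n_identities)

          def forward(self, x):                     # x: (batch, 1, 64, 64) float tensor
              return self.classifier(self.features(x).flatten(1))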

  13. Multi-task learning with group information for human action recognition

    NASA Astrophysics Data System (ADS)

    Qian, Li; Wu, Song; Pu, Nan; Xu, Shulin; Xiao, Guoqiang

    2018-04-01

    Human action recognition is an important and challenging task in computer vision research, due to the variations in human motion performance, interpersonal differences and recording settings. In this paper, we propose a novel multi-task learning framework with group information (MTL-GI) for accurate and efficient human action recognition. Specifically, we firstly obtain group information through calculating the mutual information according to the latent relationship between Gaussian components and action categories, and clustering similar action categories into the same group by affinity propagation clustering. Additionally, in order to explore the relationships of related tasks, we incorporate group information into multi-task learning. Experimental results evaluated on two popular benchmarks (UCF50 and HMDB51 datasets) demonstrate the superiority of our proposed MTL-GI framework.
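
    The grouping step described above (relating Gaussian components to action categories and clustering similar categories with affinity propagation) might look roughly like the sketch below. It replaces the paper's mutual-information computation with a simpler per-category component-usage profile, so it is an illustrative assumption rather than the MTL-GI procedure itself.

      import numpy as np
      from sklearn.cluster import AffinityPropagation

      def group_action_categories(component_labels, category_labels):
          """Cluster action categories whose Gaussian-component usage profiles are similar.

          component_labels: per-sample index of the dominant Gaussian component.
          category_labels:  per-sample action-category index.
          Returns a dict mapping each category to its group id."""
          component_labels = np.asarray(component_labels)
          category_labels = np.asarray(category_labels)
          categories = np.unique(category_labels)
          n_components = component_labels.max() + 1
          profiles = np.array([
              np.bincount(component_labels[category_labels == c], minlength=n_components)
              for c in categories], dtype=float)
          profiles /= profiles.sum(axis=1, keepdims=True)    # normalize per category
          groups = AffinityPropagation(random_state=0).fit_predict(profiles)
          return dict(zip(categories.tolist(), groups.tolist()))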

  14. A test of the orthographic recoding hypothesis

    NASA Astrophysics Data System (ADS)

    Gaygen, Daniel E.

    2003-04-01

    The Orthographic Recoding Hypothesis [D. E. Gaygen and P. A. Luce, Percept. Psychophys. 60, 465-483 (1998)] was tested. According to this hypothesis, listeners recognize spoken words heard for the first time by mapping them onto stored representations of the orthographic forms of the words. Listeners have a stable orthographic representation of words, but no phonological representation, when those words have been read frequently but never heard or spoken. Such may be the case for low frequency words such as jargon. Three experiments using visually and auditorily presented nonword stimuli tested this hypothesis. The first two experiments were explicit tests of memory (old-new tests) for words presented visually. In the first experiment, the recognition of auditorily presented nonwords was facilitated when they previously appeared on a visually presented list. The second experiment was similar, but included a concurrent articulation task during a visual word list presentation, thus preventing covert rehearsal of the nonwords. The results were similar to the first experiment. The third experiment was an indirect test of memory (auditory lexical decision task) for visually presented nonwords. Auditorily presented nonwords were identified as nonwords significantly more slowly if they had previously appeared on the visually presented list accompanied by a concurrent articulation task.

  15. The effect of semantic transparency on the processing of morphologically derived words: Evidence from decision latencies and event-related potentials.

    PubMed

    Jared, Debra; Jouravlev, Olessia; Joanisse, Marc F

    2017-03-01

    Decomposition theories of morphological processing in visual word recognition posit an early morpho-orthographic parser that is blind to semantic information, whereas parallel distributed processing (PDP) theories assume that the transparency of orthographic-semantic relationships influences processing from the beginning. To test these alternatives, the performance of participants on transparent (foolish), quasi-transparent (bookish), opaque (vanish), and orthographic control words (bucket) was examined in a series of 5 experiments. In Experiments 1-3 variants of a masked priming lexical-decision task were used; Experiment 4 used a masked priming semantic decision task, and Experiment 5 used a single-word (nonpriming) semantic decision task with a color-boundary manipulation. In addition to the behavioral data, event-related potential (ERP) data were collected in Experiments 1, 2, 4, and 5. Across all experiments, we observed a graded effect of semantic transparency in behavioral and ERP data, with the largest effect for semantically transparent words, the next largest for quasi-transparent words, and the smallest for opaque words. The results are discussed in terms of decomposition versus PDP approaches to morphological processing. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. The Role of Anterior Nuclei of the Thalamus: A Subcortical Gate in Memory Processing: An Intracerebral Recording Study

    PubMed Central

    Štillová, Klára; Jurák, Pavel; Chládek, Jan; Chrastina, Jan; Halámek, Josef; Bočková, Martina; Goldemundová, Sabina; Říha, Ivo; Rektor, Ivan

    2015-01-01

    Objective To study the involvement of the anterior nuclei of the thalamus (ANT) as compared to the involvement of the hippocampus in the processes of encoding and recognition during visual and verbal memory tasks. Methods We studied intracerebral recordings in patients with pharmacoresistent epilepsy who underwent deep brain stimulation (DBS) of the ANT with depth electrodes implanted bilaterally in the ANT and compared the results with epilepsy surgery candidates with depth electrodes implanted bilaterally in the hippocampus. We recorded the event-related potentials (ERPs) elicited by the visual and verbal memory encoding and recognition tasks. Results P300-like potentials were recorded in the hippocampus by visual and verbal memory encoding and recognition tasks and in the ANT by the visual encoding and visual and verbal recognition tasks. No significant ERPs were recorded during the verbal encoding task in the ANT. In the visual and verbal recognition tasks, the P300-like potentials in the ANT preceded the P300-like potentials in the hippocampus. Conclusions The ANT is a structure in the memory pathway that processes memory information before the hippocampus. We suggest that the ANT has a specific role in memory processes, especially memory recognition, and that memory disturbance should be considered in patients with ANT-DBS and in patients with ANT lesions. ANT is well positioned to serve as a subcortical gate for memory processing in cortical structures. PMID:26529407

  17. Real-Time Performance Feedback for the Manual Control of Spacecraft

    NASA Astrophysics Data System (ADS)

    Karasinski, John Austin

    Real-time performance metrics were developed to quantify workload, situational awareness, and manual task performance for use as visual feedback to pilots of aerospace vehicles. Results from prior lunar lander experiments with variable levels of automation were replicated and extended to provide insights for the development of real-time metrics. Increased levels of automation resulted in increased flight performance, lower workload, and increased situational awareness. Automated Speech Recognition (ASR) was employed to detect verbal callouts as a limited measure of subjects' situational awareness. A one-dimensional manual tracking task and simple instructor-model visual feedback scheme was developed. This feedback was indicated to the operator by changing the color of a guidance element on the primary flight display, similar to how a flight instructor points out elements of a display to a student pilot. Experiments showed that for this low-complexity task, visual feedback did not change subject performance, but did increase the subjects' measured workload. Insights gained from these experiments were applied to a Simplified Aid for EVA Rescue (SAFER) inspection task. The effects of variations of an instructor-model performance-feedback strategy on human performance in a novel SAFER inspection task were investigated. Real-time feedback was found to have a statistically significant effect of improving subject performance and decreasing workload in this complicated four degree of freedom manual control task with two secondary tasks.

  18. Evidence for modality-independent order coding in working memory.

    PubMed

    Depoorter, Ann; Vandierendonck, André

    2009-03-01

    The aim of the present study was to investigate the representation of serial order in working memory, more specifically whether serial order is coded by means of a modality-dependent or a modality-independent order code. This was investigated by means of a series of four experiments based on a dual-task methodology in which one short-term memory task was embedded between the presentation and recall of another short-term memory task. Two aspects were varied in these memory tasks--namely, the modality of the stimulus materials (verbal or visuo-spatial) and the presence of an order component in the task (an order or an item memory task). The results of this study showed impaired primary-task recognition performance when both the primary and the embedded task included an order component, irrespective of the modality of the stimulus materials. If one or both of the tasks did not contain an order component, less interference was found. The results of this study support the existence of a modality-independent order code.

  19. Improving the Performance of an Auditory Brain-Computer Interface Using Virtual Sound Sources by Shortening Stimulus Onset Asynchrony

    PubMed Central

    Sugi, Miho; Hagimoto, Yutaka; Nambu, Isao; Gonzalez, Alejandro; Takei, Yoshinori; Yano, Shohei; Hokari, Haruhide; Wada, Yasuhiro

    2018-01-01

    Recently, a brain-computer interface (BCI) using virtual sound sources has been proposed for estimating user intention via electroencephalogram (EEG) in an oddball task. However, its performance is still insufficient for practical use. In this study, we examine the impact that shortening the stimulus onset asynchrony (SOA) has on this auditory BCI. While very short SOA might improve its performance, sound perception and task performance become difficult, and event-related potentials (ERPs) may not be induced if the SOA is too short. Therefore, we carried out behavioral and EEG experiments to determine the optimal SOA. In the experiments, participants were instructed to direct attention to one of six virtual sounds (target direction). We used eight different SOA conditions: 200, 300, 400, 500, 600, 700, 800, and 1,100 ms. In the behavioral experiment, we recorded participant behavioral responses to target direction and evaluated recognition performance of the stimuli. In all SOA conditions, recognition accuracy was over 85%, indicating that participants could recognize the target stimuli correctly. Next, using a silent counting task in the EEG experiment, we found significant differences between target and non-target sound directions in all but the 200-ms SOA condition. When we calculated an identification accuracy using Fisher discriminant analysis (FDA), the SOA could be shortened by 400 ms without decreasing the identification accuracies. Thus, improvements in performance (evaluated by BCI utility) could be achieved. On average, higher BCI utilities were obtained in the 400 and 500-ms SOA conditions. Thus, auditory BCI performance can be optimized for both behavioral and neurophysiological responses by shortening the SOA. PMID:29535602
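
    The identification accuracies reported above were obtained with Fisher discriminant analysis over EEG epochs. A generic approximation with scikit-learn's linear discriminant analysis is sketched below; the epoch shape, crude downsampling, shrinkage solver, and 5-fold cross-validation are assumptions rather than the authors' exact pipeline.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score

      def target_vs_nontarget_accuracy(epochs, is_target, decimate=8):
          """epochs: (n_trials, n_channels, n_samples) post-stimulus EEG segments.
          is_target: boolean array, True where the cued (attended) direction was presented."""
          X = epochs[:, :, ::decimate].reshape(len(epochs), -1)   # crude downsampling
          lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
          return cross_val_score(lda, X, np.asarray(is_target), cv=5).mean()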

  20. Discrimination and categorization of emotional facial expressions and faces in Parkinson's disease.

    PubMed

    Alonso-Recio, Laura; Martín, Pilar; Rubio, Sandra; Serrano, Juan M

    2014-09-01

    Our objective was to compare the ability to discriminate and categorize emotional facial expressions (EFEs) and facial identity characteristics (age and/or gender) in a group of 53 individuals with Parkinson's disease (PD) and another group of 53 healthy subjects. On the one hand, by means of discrimination and identification tasks, we compared two stages in the visual recognition process that could be selectively affected in individuals with PD. On the other hand, facial expression versus gender and age comparison permits us to contrast whether the emotional or non-emotional content influences the configural perception of faces. In Experiment I, we did not find differences between groups, either with facial expression or age, in discrimination tasks. Conversely, in Experiment II, we found differences between the groups, but only in the EFE identification task. Taken together, our results indicate that configural perception of faces does not seem to be globally impaired in PD. However, this ability is selectively altered when the categorization of emotional faces is required. A deeper assessment of the PD group indicated that decline in facial expression categorization is more evident in a subgroup of patients with higher global impairment (motor and cognitive). Taken together, these results suggest that the problems found in facial expression recognition may be associated with the progressive neuronal loss in frontostriatal and mesolimbic circuits, which characterizes PD. © 2013 The British Psychological Society.

  1. Differentiating between self and others: an ALE meta-analysis of fMRI studies of self-recognition and theory of mind.

    PubMed

    van Veluw, Susanne J; Chance, Steven A

    2014-03-01

    The perception of self and others is a key aspect of social cognition. In order to investigate the neurobiological basis of this distinction we reviewed two classes of task that study self-awareness and awareness of others (theory of mind, ToM). A reliable task to measure self-awareness is the recognition of one's own face in contrast to the recognition of others' faces. False-belief tasks are widely used to identify neural correlates of ToM as a measure of awareness of others. We performed an activation likelihood estimation meta-analysis, using the fMRI literature on self-face recognition and false-belief tasks. The brain areas involved in performing false-belief tasks were the medial prefrontal cortex (MPFC), bilateral temporo-parietal junction, precuneus, and the bilateral middle temporal gyrus. Distinct self-face recognition regions were the right superior temporal gyrus, the right parahippocampal gyrus, the right inferior frontal gyrus/anterior cingulate cortex, and the left inferior parietal lobe. Overlapping brain areas were the superior temporal gyrus, and the more ventral parts of the MPFC. We confirmed that self-recognition in contrast to recognition of others' faces, and awareness of others involves a network that consists of separate, distinct neural pathways, but also includes overlapping regions of higher order prefrontal cortex where these processes may be combined. Insights derived from the neurobiology of disorders such as autism and schizophrenia are consistent with this notion.

  2. Familiarity and face emotion recognition in patients with schizophrenia.

    PubMed

    Lahera, Guillermo; Herrera, Sara; Fernández, Cristina; Bardón, Marta; de los Ángeles, Victoria; Fernández-Liria, Alberto

    2014-01-01

    To assess emotion recognition in familiar and unknown faces in a sample of schizophrenic patients and healthy controls. Face emotion recognition of 18 outpatients diagnosed with schizophrenia (DSM-IV-TR) and 18 healthy volunteers was assessed with two Emotion Recognition Tasks using familiar faces and unknown faces. Each subject was accompanied by four familiar people (parents, siblings, or friends), who were photographed expressing the six Ekman basic emotions. Face emotion recognition of familiar faces was assessed with this ad hoc instrument. In each case, the patient rated (from 1 to 10) the subjective familiarity and affective valence corresponding to each person. Patients with schizophrenia not only showed a deficit in the recognition of emotions on unknown faces (p=.01), but they also showed an even more pronounced deficit on familiar faces (p=.001). Controls had a similar success rate in the unknown faces task (mean: 18 +/- 2.2) and the familiar faces task (mean: 17.4 +/- 3). However, patients had a significantly lower score in the familiar faces task (mean: 13.2 +/- 3.8) than in the unknown faces task (mean: 16 +/- 2.4; p<.05). In both tests, the highest number of errors was for the emotions of anger and fear. Subjectively, the patient group showed a lower level of familiarity and emotional valence toward their respective relatives (p<.01). The sense of familiarity may be a factor involved in face emotion recognition, and it may be disturbed in schizophrenia. © 2013.

  3. The Initial Development of Object Knowledge by a Learning Robot

    PubMed Central

    Modayil, Joseph; Kuipers, Benjamin

    2008-01-01

    We describe how a robot can develop knowledge of the objects in its environment directly from unsupervised sensorimotor experience. The object knowledge consists of multiple integrated representations: trackers that form spatio-temporal clusters of sensory experience, percepts that represent properties for the tracked objects, classes that support efficient generalization from past experience, and actions that reliably change object percepts. We evaluate how well this intrinsically acquired object knowledge can be used to solve externally specified tasks including object recognition and achieving goals that require both planning and continuous control. PMID:19953188

  4. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    PubMed

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
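
    To make the hash table-based voting step concrete, here is a minimal sketch (not the authors' code) in which each model and each scene region is reduced to a set of quantised tensor keys; the quantisation itself and all names are illustrative assumptions. In the actual algorithm, the highest-voted model is then aligned with the scene and verified before the object is segmented.

        from collections import defaultdict
        from typing import Dict, Hashable, Iterable, List, Tuple

        def build_hash_table(model_tensors: Dict[str, Iterable[Hashable]]) -> Dict[Hashable, List[str]]:
            """Map each quantised tensor key to the models containing it (offline step)."""
            table: Dict[Hashable, List[str]] = defaultdict(list)
            for model_name, keys in model_tensors.items():
                for key in keys:
                    table[key].append(model_name)
            return table

        def cast_votes(scene_keys: Iterable[Hashable],
                       table: Dict[Hashable, List[str]]) -> List[Tuple[str, int]]:
            """Accumulate one vote per model for every scene key found in the table (online step)."""
            votes: Dict[str, int] = defaultdict(int)
            for key in scene_keys:
                for model_name in table.get(key, []):
                    votes[model_name] += 1
            return sorted(votes.items(), key=lambda kv: kv[1], reverse=True)

        table = build_hash_table({"mug": [(1, 2), (3, 4)], "phone": [(3, 4), (5, 6)]})
        print(cast_votes([(3, 4), (5, 6)], table))  # [('phone', 2), ('mug', 1)]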

  5. Associative false consumer memory: effects of need for cognition and encoding task.

    PubMed

    Parker, Andrew; Dagnall, Neil

    2018-04-01

    Two experiments investigated the effects of product-attribute associations on false consumer memory. In both experiments, subjects were presented with sets of related product attributes under incidental encoding conditions. Later, recognition memory was tested with studied attributes, non-studied but associated attributes (critical lures) and non-studied unrelated attributes. In Experiment 1, the effect of Need for Cognition (NFC) was assessed. It was found that individuals high in NFC recognised more presented attributes and falsely recognised more associative critical lures. The increase in both true and associative false memory was accompanied by a greater number of responses that index the retrieval of detailed episodic-like information. Experiment 2 replicated the main findings through an experimental manipulation of the encoding task that required subjects to consider purchase likelihood. Explanations for these findings are considered from the perspective of activation processes and knowledge structures in the form of gist-based representations.

  6. Working memory affects false memory production for emotional events.

    PubMed

    Mirandola, Chiara; Toffalini, Enrico; Ciriello, Alfonso; Cornoldi, Cesare

    2017-01-01

    Whereas a link between working memory (WM) and memory distortions has been demonstrated, its influence on emotional false memories is unclear. In two experiments, a verbal WM task and a false memory paradigm for negative, positive or neutral events were employed. In Experiment 1, we investigated individual differences in verbal WM and found that the interaction between valence and WM predicted false recognition, with negative and positive material protecting high WM individuals against false remembering; the beneficial effect of negative material disappeared in low WM participants. In Experiment 2, we lowered the WM capacity of half of the participants with a double task request, which led to an overall increase in false memories; furthermore, consistent with Experiment 1, the increase in negative false memories was larger than that of neutral or positive ones. It is concluded that WM plays a critical role in determining false memory production, specifically influencing the processing of negative material.

  7. Holistic processing of words modulated by reading experience.

    PubMed

    Wong, Alan C-N; Bukach, Cindy M; Yuen, Crystal; Yang, Lizhuang; Leung, Shirley; Greenspon, Emma

    2011-01-01

    Perceptual expertise has been studied intensively with faces and object categories involving detailed individuation. A common finding is that experience in fulfilling the task demand of fine, subordinate-level discrimination between highly similar instances is associated with the development of holistic processing. This study examines whether holistic processing is also engaged by expert word recognition, which is thought to involve coarser, basic-level processing that is more part-based. We adopted a paradigm widely used for faces--the composite task, and found clear evidence of holistic processing for English words. A second experiment further showed that holistic processing for words was sensitive to the amount of experience with the language concerned (native vs. second-language readers) and with the specific stimuli (words vs. pseudowords). The adoption of a paradigm from the face perception literature to the study of expert word perception is important for further comparison between perceptual expertise with words and face-like expertise.

  8. The picture superiority effect in a cross-modality recognition task.

    PubMed

    Stenberg, G; Radeborg, K; Hedman, L R

    1995-07-01

    Words and pictures were studied and recognition tests given in which each studied object was to be recognized in both word and picture format. The main dependent variable was the latency of the recognition decision. The purpose was to investigate the effects of study modality (word or picture), of congruence between study and test modalities, and of priming resulting from repeated testing. Experiments 1 and 2 used the same basic design, but the latter also varied retention interval. Experiment 3 added a manipulation of instructions to name studied objects, and Experiment 4 deviated from the others by presenting both picture and word referring to the same object together for study. The results showed that congruence between study and test modalities consistently facilitated recognition. Furthermore, items studied as pictures were more rapidly recognized than were items studied as words. With repeated testing, the second instance was affected by its predecessor, but the facilitating effect of picture-to-word priming exceeded that of word-to-picture priming. The findings suggest a two-stage recognition process, in which the first stage is based on perceptual familiarity and the second uses semantic links for a retrieval search. Common-code theories that grant privileged access to the semantic code for pictures or, alternatively, dual-code theories that assume mnemonic superiority for the image code are supported by the findings. Explanations of the picture superiority effect as resulting from dual encoding of pictures are not supported by the data.

  9. Deficits in recognition, identification, and discrimination of facial emotions in patients with bipolar disorder.

    PubMed

    Benito, Adolfo; Lahera, Guillermo; Herrera, Sara; Muncharaz, Ramón; Benito, Guillermo; Fernández-Liria, Alberto; Montes, José Manuel

    2013-01-01

    To analyze the recognition, identification, and discrimination of facial emotions in a sample of outpatients with bipolar disorder (BD). Forty-four outpatients with diagnosis of BD and 48 matched control subjects were selected. Both groups were assessed with tests for recognition (Emotion Recognition-40 - ER40), identification (Facial Emotion Identification Test - FEIT), and discrimination (Facial Emotion Discrimination Test - FEDT) of facial emotions, as well as a theory of mind (ToM) verbal test (Hinting Task). Differences between groups were analyzed, controlling the influence of mild depressive and manic symptoms. Patients with BD scored significantly lower than controls on recognition (ER40), identification (FEIT), and discrimination (FEDT) of emotions. Regarding the verbal measure of ToM, a lower score was also observed in patients compared to controls. Patients with mild syndromal depressive symptoms obtained outcomes similar to patients in euthymia. A significant correlation between FEDT scores and global functioning (measured by the Functioning Assessment Short Test, FAST) was found. These results suggest that, even in euthymia, patients with BD experience deficits in recognition, identification, and discrimination of facial emotions, with potential functional implications.

  10. Does cortisol modulate emotion recognition and empathy?

    PubMed

    Duesenberg, Moritz; Weber, Juliane; Schulze, Lars; Schaeuffele, Carmen; Roepke, Stefan; Hellmann-Regen, Julian; Otte, Christian; Wingenfeld, Katja

    2016-04-01

    Emotion recognition and empathy are important aspects of interacting with and understanding other people's behaviors and feelings. The human environment comprises stressful situations that impact social interactions on a daily basis. The aim of the study was to examine the effects of the stress hormone cortisol on emotion recognition and empathy. In this placebo-controlled study, 40 healthy men and 40 healthy women (mean age 24.5 years) received either 10 mg of hydrocortisone or placebo. We used the Multifaceted Empathy Test to measure emotional and cognitive empathy. Furthermore, we examined emotion recognition from facial expressions, which contained two emotions (anger and sadness) and two emotion intensities (40% and 80%). We did not find a main effect of treatment or sex on either empathy or emotion recognition, but we did find a sex × emotion interaction on emotion recognition. The main result was a four-way interaction on emotion recognition including treatment, sex, emotion and task difficulty. At 40% task difficulty, women recognized angry faces better than men in the placebo condition. Furthermore, in the placebo condition, men recognized sadness better than anger. At 80% task difficulty, men and women performed equally well in recognizing sad faces, but men performed worse than women with regard to angry faces. Our results therefore did not support the hypothesis that increases in cortisol concentration alone influence empathy and emotion recognition in healthy young individuals. However, sex and task difficulty appear to be important variables in emotion recognition from facial expressions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Is lorazepam-induced amnesia specific to the type of memory or to the task used to assess it?

    PubMed

    File, S E; Sharma, R; Shaffer, J

    1992-01-01

    Retrieval tasks can be classified along a continuum from conceptually driven (relying on the encoded meaning of the material) to data driven (relying on the perceptual record and surface features of the material). Since most explicit memory tests are conceptually driven and most implicit memory tests are data driven, there has been considerable confounding of the memory system being assessed and the processing required by the retrieval task. The purpose of the present experiment was to investigate the effects of lorazepam on explicit memory, using both types of retrieval task. Lorazepam (2.5 mg) or matched placebo was administered to healthy volunteers, and changes in subjective mood ratings and in performance on tests of memory were measured. Lorazepam made subjects significantly more drowsy, feeble, clumsy, muzzy, lethargic and mentally slow. Lorazepam significantly impaired recognition memory for slides, reduced the number of words remembered when retrieval was cued by the first two letters, and reduced the number of pictures remembered when retention was cued with picture fragments. Thus episodic memory was impaired whether the task used was conceptually driven (as in slide recognition) or data driven (as in the other two tasks). Analyses of covariance indicated that the memory impairments were independent of increased sedation, as assessed by self-ratings. In contrast to the deficits in episodic memory, there were no lorazepam-induced impairments in tests of semantic memory, whether this was measured in the conceptually driven task of category generation or in the data-driven task of word-stem completion.

  12. Multi-tasking arbitration and behaviour design for human-interactive robots

    NASA Astrophysics Data System (ADS)

    Kobayashi, Yuichi; Onishi, Masaki; Hosoe, Shigeyuki; Luo, Zhiwei

    2013-05-01

    Robots that interact with humans in household environments are required to handle multiple real-time tasks simultaneously, such as carrying objects, avoiding collisions and conversing with humans. This article presents a design framework for the control and recognition processes needed to meet these requirements while taking stochastic human behaviour into account. The proposed design method first introduces a Petri net for the synchronisation of multiple tasks. The Petri net formulation is then converted to Markov decision processes and handled within an optimal control framework. Three tasks (safety confirmation, object conveyance and conversation) interact and are expressed by the Petri net. Using the proposed framework, tasks that would normally be designed by integrating many if-then rules can be designed systematically, as a state estimation and optimisation problem viewed from the perspective of shortest-time optimal control. The proposed arbitration method was verified by simulations and experiments using RI-MAN, which was developed for interactive tasks with humans.
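
    A minimal sketch of the Petri-net bookkeeping behind such task synchronisation is given below; the places, transitions and marking are hypothetical, and the conversion to Markov decision processes described in the abstract is not shown.

        from typing import Dict, List, Tuple

        Marking = Dict[str, int]                     # tokens per place
        Transition = Tuple[List[str], List[str]]     # (input places, output places)

        def enabled(marking: Marking, transition: Transition) -> bool:
            inputs, _ = transition
            return all(marking.get(p, 0) > 0 for p in inputs)

        def fire(marking: Marking, transition: Transition) -> Marking:
            """Consume one token from each input place and add one to each output place."""
            inputs, outputs = transition
            new = dict(marking)
            for p in inputs:
                new[p] -= 1
            for p in outputs:
                new[p] = new.get(p, 0) + 1
            return new

        # Hypothetical arbitration rule: conveyance may start only once safety is confirmed.
        marking = {"safety_confirmed": 1, "conveyance_idle": 1}
        start_conveyance = (["safety_confirmed", "conveyance_idle"], ["conveying"])
        if enabled(marking, start_conveyance):
            marking = fire(marking, start_conveyance)
        print(marking)  # {'safety_confirmed': 0, 'conveyance_idle': 0, 'conveying': 1}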

  13. Aberrant neural networks for the recognition memory of socially relevant information in patients with schizophrenia.

    PubMed

    Oh, Jooyoung; Chun, Ji-Won; Kim, Eunseong; Park, Hae-Jeong; Lee, Boreom; Kim, Jae-Jin

    2017-01-01

    Patients with schizophrenia exhibit several cognitive deficits, including memory impairment. Problems with recognition memory can hinder socially adaptive behavior. Previous investigations have suggested that altered activation of the frontotemporal area plays an important role in recognition memory impairment. However, the cerebral networks related to these deficits are not known. The aim of this study was to elucidate the brain networks required for recognizing socially relevant information in patients with schizophrenia performing an old-new recognition task. Sixteen patients with schizophrenia and 16 controls participated in this study. First, the subjects performed the theme-identification task during functional magnetic resonance imaging. In this task, pictures depicting social situations were presented with three words, and the subjects were asked to select the best theme word for each picture. The subjects then performed an old-new recognition task in which they were asked to discriminate whether the presented words were old or new. Task performance and neural responses in the old-new recognition task were compared between the subject groups. An independent component analysis of the functional connectivity was performed. The patients with schizophrenia exhibited decreased discriminability and increased activation of the right superior temporal gyrus compared with the controls during correct responses. Furthermore, aberrant network activities were found in the frontopolar and language comprehension networks in the patients. The functional connectivity analysis showed aberrant connectivity in the frontopolar and language comprehension networks in the patients with schizophrenia, and these aberrations possibly contribute to their low recognition performance and social dysfunction. These results suggest that the frontopolar and language comprehension networks are potential therapeutic targets in patients with schizophrenia.
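
    The abstract reports decreased discriminability in the old-new recognition task; a standard way to quantify discriminability in such tasks is the signal-detection index d' = z(hit rate) - z(false-alarm rate). The study does not state which index it used, so the sketch below is illustrative only.

        from statistics import NormalDist

        def d_prime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
            """Signal-detection discriminability for an old/new recognition test."""
            z = NormalDist().inv_cdf
            # A small correction keeps the rates away from 0 and 1.
            hit_rate = (hits + 0.5) / (hits + misses + 1.0)
            fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
            return z(hit_rate) - z(fa_rate)

        print(round(d_prime(hits=40, misses=10, false_alarms=12, correct_rejections=38), 2))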

  14. Effect of physical workload and modality of information presentation on pattern recognition and navigation task performance by high-fit young males.

    PubMed

    Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David

    2017-11-01

    Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and the modality of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalently high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels than under high intensity. Navigation accuracy was lower under high-level exertion than under medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted of the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.

  15. Shape shifting: Local landmarks interfere with navigation by, and recognition of, global shape.

    PubMed

    Buckley, Matthew G; Smith, Alastair D; Haselgrove, Mark

    2014-03-01

    An influential theory of spatial navigation states that the boundary shape of an environment is preferentially encoded over and above other spatial cues, such that it is impervious to interference from alternative sources of information. We explored this claim with 3 intradimensional-extradimensional shift experiments, designed to examine the interaction of landmark and geometric features of the environment in a virtual navigation task. In Experiments 1 and 2, participants were first required to find a hidden goal using information provided by the shape of the arena or landmarks integrated into the arena boundary (Experiment 1) or within the arena itself (Experiment 2). Participants were then transferred to a different-shaped arena that contained novel landmarks and were again required to find a hidden goal. In both experiments, participants who were navigating on the basis of cues that were from the same dimension that was previously relevant (intradimensional shift) learned to find the goal significantly faster than participants who were navigating on the basis of cues that were from a dimension that was previously irrelevant (extradimensional shift). This suggests that shape information does not hold special status when learning about an environment. Experiment 3 replicated Experiment 2 and also assessed participants' recognition of the global shape of the navigated arenas. Recognition was attenuated when landmarks were relevant to navigation throughout the experiment. The results of these experiments are discussed in terms of associative and non-associative theories of spatial learning.

  16. Knowing what to remember and forget: a developmental study of cue memory in intentional forgetting.

    PubMed

    Lehman, E B; Morath, R; Franklin, K; Elbaz, V

    1998-09-01

    These experiments are the first to investigate children's encoding and use of information about a memory cue in Bjork's (1972) intentional forgetting task. In Experiment 1, children in Grades 2, 4, and 6 and college students were given cues to either remember or forget after the presentation of each picture. Recall and recognition tests of pictures and cues followed. The procedure in Experiment 2 was identical to that in Experiment 1 except that the list of presentation pictures was altered for some children (Grades 3 and 4) and adolescents (Grades 8 and 9) so that remember and forget cues were associated with particular taxonomic categories. In Experiment 3, the testing component was modified so that children (Grades 2, 3, and 4) and college students were asked to recall only the cue associated with each picture. The results indicated that (1) children as young as second graders encode the cue associated with each picture, although to a lesser extent than do college students, (2) much improvement in intentional forgetting is still occurring during adolescence, (3) only adults adequately cluster their recall by cue, (4) associating remember and forget cues with items from different categories does not increase the differentiation between cues, and (5) eliminating picture recall and recognition has minimal effects on the magnitude of cue judgments. These results suggest that children's difficulties on intentional forgetting tasks stem, at least in part, from their poorer encoding of information about whether an item should be remembered or forgotten.

  17. Spoken Language Processing in the Clarissa Procedure Browser

    NASA Technical Reports Server (NTRS)

    Rayner, M.; Hockey, B. A.; Renders, J.-M.; Chatzichrisafis, N.; Farrell, K.

    2005-01-01

    Clarissa, an experimental voice-enabled procedure browser that has recently been deployed on the International Space Station, is, as far as we know, the first spoken dialogue system in space. We describe the objectives of the Clarissa project and the system's architecture. In particular, we focus on three key problems: grammar-based speech recognition using the Regulus toolkit; methods for open-mic speech recognition; and robust, side-effect-free dialogue management for handling undos, corrections and confirmations. We first describe the grammar-based recogniser we have built using Regulus, and report experiments in which we compare it against a class N-gram recogniser trained on the same 3297-utterance dataset. We obtained a 15% relative improvement in WER and a 37% improvement in semantic error rate. The grammar-based recogniser moreover outperforms the class N-gram version for utterances of all lengths from 1 to 9 words inclusive. The central problem in building an open-mic speech recognition system is being able to distinguish between commands directed at the system and other material (cross-talk), which should be rejected. Most spoken dialogue systems make the accept/reject decision by applying a threshold to the recognition confidence score. We show how a simple and general method, based on standard approaches to document classification using Support Vector Machines, can give substantially better performance, and report experiments showing a relative reduction in the task-level error rate of about 25% compared to the baseline confidence-threshold method. Finally, we describe a general side-effect-free dialogue management architecture that we have implemented in Clarissa, which extends the "update semantics" framework by including task as well as dialogue information in the information state. We show that this enables elegant treatments of several dialogue management problems, including corrections, confirmations, querying of the environment, and regression testing.
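
    The accept/reject idea can be illustrated as follows: each recognised utterance is treated as a tiny document, and a linear SVM separates system-directed commands from cross-talk rather than thresholding the recogniser confidence score. The training data, features and library below are illustrative assumptions (scikit-learn), not the Clarissa implementation.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        # Toy examples of recognised utterances (invented for illustration).
        commands = ["next step", "go to step three", "set an alarm", "read note one"]
        cross_talk = ["could you hand me that", "we saw it yesterday", "thanks a lot"]

        texts = commands + cross_talk
        labels = [1] * len(commands) + [0] * len(cross_talk)

        # Bag-of-words features (unigrams and bigrams) feeding a linear SVM.
        clf = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LinearSVC())
        clf.fit(texts, labels)

        print(clf.predict(["go to step five", "could you hand me the pen"]))  # e.g. [1 0]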

  18. Generation of oculomotor images during tasks requiring visual recognition of polygons.

    PubMed

    Olivier, G; de Mendoza, J L

    2001-06-01

    This paper concerns the contribution of mentally simulated ocular exploration to the generation of a visual mental image. In Exp. 1, repeated exploration of the outlines of an irregular decagon allowed incidental learning of the shape. Analyses showed that subjects memorized their ocular movements rather than the polygon itself. In Exp. 2, exploration of a reversible figure such as a Necker cube varied in opposite directions, and both perspective possibilities were then presented. The perspective the subjects recognized depended on the way they had explored the ambiguous figure. In both experiments, during recognition the subjects recalled a visual mental image of the polygon, which they compared with the different polygons proposed for recognition. To interpret the data, hypotheses concerning common processes underlying both the motor intention of ocular movements and the generation of a visual image are suggested.

  19. Semantic Ambiguity: Do Multiple Meanings Inhibit or Facilitate Word Recognition?

    PubMed

    Haro, Juan; Ferré, Pilar

    2018-06-01

    It is not clear whether multiple unrelated meanings inhibit or facilitate word recognition. Some studies have found a disadvantage for words having multiple meanings with respect to unambiguous words in lexical decision tasks (LDT), whereas several others have shown a facilitation for such words. In the present study, we argue that these inconsistent findings may be due to the approach employed to select ambiguous words across studies. To address this issue, we conducted three LDT experiments in which we varied the measure used to classify ambiguous and unambiguous words. The results suggest that multiple unrelated meanings facilitate word recognition. In addition, we observed that the approach employed to select ambiguous words may affect the pattern of experimental results. This evidence has relevant implications for theoretical accounts of ambiguous words processing and representation.

  20. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization.

    PubMed

    Dai, Hong-Jie; Lai, Po-Ting; Chang, Yung-Chun; Tsai, Richard Tzong-Han

    2015-01-01

    The functions of chemical compounds and drugs that affect biological processes, and their particular effects on the onset and treatment of diseases, have attracted increasing interest with the advancement of research in the life sciences. To extract knowledge from the extensive literature on such compounds and drugs, the organizers of BioCreative IV administered the CHEMical Compound and Drug Named Entity Recognition (CHEMDNER) task to establish a standard dataset for evaluating state-of-the-art chemical entity recognition methods. This study introduces the approach of our CHEMDNER system. Instead of emphasizing the development of novel feature sets for machine learning, this study investigates the effect of various tag schemes on the recognition of the names of chemicals and drugs by using conditional random fields. Experiments were conducted using combinations of different tokenization strategies and tag schemes to investigate the effects of tag set selection and tokenization method on the CHEMDNER task. This study presents the CHEMDNER performance of three more representative tag schemes (IOBE, IOBES, and IOB12E) when applied to a widely utilized IOB tag set and combined with coarse- and fine-grained tokenization methods. The experimental results reveal that the fine-grained tokenization strategy performed best in terms of precision, recall and F-scores when the IOBES tag set was utilized. The IOBES model with fine-grained tokenization yielded the best F-scores in the six chemical entity categories other than the "Multiple" entity category. Nonetheless, no significant improvement was observed when a more representative tag scheme was used with the coarse- or fine-grained tokenization rules. The best F-scores achieved by the developed system on the test dataset of the CHEMDNER task were 0.833 and 0.815 for the chemical document indexing and the chemical entity mention recognition tasks, respectively. The results highlight the importance of tag set selection and the use of different tokenization strategies. Fine-grained tokenization combined with the IOBES tag set most effectively recognizes chemical and drug names. To the best of the authors' knowledge, this is the first comprehensive investigation of the use of various tag schemes combined with different tokenization strategies for the recognition of chemical entities.
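
    As a concrete illustration of the IOBES scheme discussed above, the sketch below converts a token sequence and an entity span into IOBES labels of the kind fed to a conditional random field tagger. The tokenisation and example mention are invented; this is not the authors' code.

        from typing import List, Tuple

        def to_iobes(tokens: List[str], spans: List[Tuple[int, int]], label: str = "CHEM") -> List[str]:
            """Tag tokens with IOBES labels; spans are (start, end) token indices, end exclusive."""
            tags = ["O"] * len(tokens)
            for start, end in spans:
                if end - start == 1:
                    tags[start] = f"S-{label}"
                else:
                    tags[start] = f"B-{label}"
                    for i in range(start + 1, end - 1):
                        tags[i] = f"I-{label}"
                    tags[end - 1] = f"E-{label}"
            return tags

        # Fine-grained tokenisation splits the hyphenated chemical name into separate tokens.
        tokens = ["treated", "with", "5", "-", "fluorouracil", "daily"]
        print(list(zip(tokens, to_iobes(tokens, [(2, 5)]))))
        # [('treated', 'O'), ('with', 'O'), ('5', 'B-CHEM'), ('-', 'I-CHEM'),
        #  ('fluorouracil', 'E-CHEM'), ('daily', 'O')]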

  1. Approach to recognition of flexible form for credit card expiration date recognition as example

    NASA Astrophysics Data System (ADS)

    Sheshkus, Alexander; Nikolaev, Dmitry P.; Ingacheva, Anastasia; Skoryukina, Natalya

    2015-12-01

    In this paper we consider the task of finding information fields within documents that have a flexible form, using the credit card expiration date field as an example. We discuss the main difficulties and suggest possible solutions. In our case this task is to be solved on mobile devices, so computational complexity has to be as low as possible. We provide results of an analysis of the suggested algorithm. The error distribution of the recognition system shows that the suggested algorithm solves the task with the required accuracy.

  2. Backward masking, the suffix effect, and preperceptual storage.

    PubMed

    Kallman, H J; Massaro, D W

    1983-04-01

    This article considers the use of auditory backward recognition masking (ABRM) and stimulus suffix experiments as indexes of preperceptual auditory storage. In the first part of the article, two ABRM experiments that failed to demonstrate a mask disinhibition effect found previously in stimulus suffix experiments are reported. The failure to demonstrate mask disinhibition is inconsistent with an explanation of ABRM in terms of lateral inhibition. In the second part of the article, evidence is presented to support the conclusion that the suffix effect involves the contributions of later processing stages and does not provide an uncontaminated index of preperceptual storage. In contrast, it is claimed that ABRM experiments provide the most direct index of the temporal course of perceptual recognition. Partial-report tasks and other paradigms are also evaluated in terms of their contributions to an understanding of preperceptual auditory storage. Differences between interruption and integration masking are discussed along with the role of preperceptual auditory storage in speech perception.

  3. People with chronic facial pain perform worse than controls at a facial emotion recognition task, but it is not all about the emotion.

    PubMed

    von Piekartz, H; Wallwork, S B; Mohr, G; Butler, D S; Moseley, G L

    2015-04-01

    Alexithymia, or a lack of emotional awareness, is prevalent in some chronic pain conditions and has been linked to poor recognition of others' emotions. Recognising others' emotions from their facial expression involves both emotional and motor processing, but the possible contribution of motor disruption has not been considered. It is possible that poor performance on emotional recognition tasks could reflect problems with emotional processing, motor processing or both. We hypothesised that people with chronic facial pain would be less accurate in recognising others' emotions from facial expressions, would be less accurate in a motor imagery task involving the face, and that performance on both tasks would be positively related. A convenience sample of 19 people (15 females) with chronic facial pain and 19 gender-matched controls participated. They undertook two tasks; in the first task, they identified the facial emotion presented in a photograph. In the second, they identified whether the person in the image had a facial feature pointed towards their left or right side, a well-recognised paradigm to induce implicit motor imagery. People with chronic facial pain performed worse than controls at both tasks (Facially Expressed Emotion Labelling (FEEL) task P < 0.001; left/right judgment task P < 0.001). Participants who were more accurate at one task were also more accurate at the other, regardless of group (P < 0.001, r(2) = 0.523). Participants with chronic facial pain were worse than controls at both the FEEL emotion recognition task and the left/right facial expression task, and performance covaried within participants. We propose that disrupted motor processing may underpin or at least contribute to the difficulty that facial pain patients have in emotion recognition and that further research that tests this proposal is warranted. © 2014 John Wiley & Sons Ltd.

  4. Facial emotion recognition is inversely correlated with tremor severity in essential tremor.

    PubMed

    Auzou, Nicolas; Foubert-Samier, Alexandra; Dupouy, Sandrine; Meissner, Wassilios G

    2014-04-01

    Here we assess limbic and orbitofrontal control in 20 patients with essential tremor (ET) and 18 age-matched healthy controls using the Ekman Facial Emotion Recognition Task and the Iowa Gambling Task. Our results show an inverse relation between facial emotion recognition and tremor severity. ET patients also showed worse performance in joy and fear recognition, as well as subtle abnormalities in risk detection, but these differences did not reach significance after correction for multiple testing.

  5. Word position affects stimulus recognition: evidence for early ERP short-term plastic modulation.

    PubMed

    Spironelli, Chiara; Galfano, Giovanni; Umiltà, Carlo; Angrilli, Alessandro

    2011-12-01

    The present study was aimed at investigating the short-term plastic changes that follow word learning at a neurophysiological level. The main hypothesis was that word position (left or right visual field, LVF/RH or RVF/LH) in the initial learning phase would leave a trace that affected, in the subsequent recognition phase, the Recognition Potential (i.e., the first negative component distinguishing words from other stimuli) elicited 220-240 ms after centrally presented stimuli. In the learning phase, forty-eight students were presented with 125 words for 4 s each, half randomly shown in the left and half in the right visual field. In the recognition phase, participants were split into two equal groups: one was assigned to the Word task, the other to the Picture task (in which half of the 125 pictures were new, and half matched previously studied words). During the Word task, old RVF/LH words elicited significantly greater negativity in left posterior sites than old LVF/RH words, which in turn showed the same pattern of activation evoked by new words. Therefore, correspondence between stimulus spatial position and the hemisphere specialized in automatic word recognition created a robust prime for subsequent recognition. During the Picture task, pictures matching old RVF/LH words showed no differences compared with new pictures, but evoked significantly greater negativity than pictures matching old LVF/RH words. Thus, the priming effect vanished when the task required a switch from visual analysis to stored linguistic information, whereas the lack of correspondence between stimulus position and the network specialized in automatic word recognition (i.e., when words were presented to the LVF/RH) revealed the implicit costs for recognition. Results support the view that short-term plastic changes occurring in a linguistic learning task interact with both stimulus position and modality (written word vs. picture representation). Copyright © 2011 Elsevier B.V. All rights reserved.

  6. Perirhinal Cortex Resolves Feature Ambiguity in Configural Object Recognition and Perceptual Oddity Tasks

    ERIC Educational Resources Information Center

    Bartko, Susan J.; Winters, Boyer D.; Cowell, Rosemary A.; Saksida, Lisa M.; Bussey, Timothy J.

    2007-01-01

    The perirhinal cortex (PRh) has a well-established role in object recognition memory. More recent studies suggest that PRh is also important for two-choice visual discrimination tasks. Specifically, it has been suggested that PRh contains conjunctive representations that help resolve feature ambiguity, which occurs when a task cannot easily be…

  7. Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues

    ERIC Educational Resources Information Center

    Ray, Suchismita; Bates, Marsha E.

    2006-01-01

    Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…

  8. Recognition memory span in autopsy-confirmed Dementia with Lewy Bodies and Alzheimer's Disease.

    PubMed

    Salmon, David P; Heindel, William C; Hamilton, Joanne M; Vincent Filoteo, J; Cidambi, Varun; Hansen, Lawrence A; Masliah, Eliezer; Galasko, Douglas

    2015-08-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and Normal Control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from long-term storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. Copyright © 2015 Elsevier Ltd. All rights reserved.

  9. Recognition Memory Span in Autopsy-Confirmed Dementia with Lewy Bodies and Alzheimer’s Disease

    PubMed Central

    Salmon, David P.; Heindel, William C.; Hamilton, Joanne M.; Filoteo, J. Vincent; Cidambi, Varun; Hansen, Lawrence A.; Masliah, Eliezer; Galasko, Douglas

    2016-01-01

    Evidence from patients with amnesia suggests that recognition memory span tasks engage both long-term memory (i.e., secondary memory) processes mediated by the diencephalic-medial temporal lobe memory system and working memory processes mediated by fronto-striatal systems. Thus, the recognition memory span task may be particularly effective for detecting memory deficits in disorders that disrupt both memory systems. The presence of unique pathology in fronto-striatal circuits in Dementia with Lewy Bodies (DLB) compared to AD suggests that performance on the recognition memory span task might be differentially affected in the two disorders even though they have quantitatively similar deficits in secondary memory. In the present study, patients with autopsy-confirmed DLB or AD, and normal control (NC) participants, were tested on separate recognition memory span tasks that required them to retain increasing amounts of verbal, spatial, or visual object (i.e., faces) information across trials. Results showed that recognition memory spans for verbal and spatial stimuli, but not face stimuli, were lower in patients with DLB than in those with AD, and more impaired relative to NC performance. This was despite similar deficits in the two patient groups on independent measures of secondary memory such as the total number of words recalled from Long-Term Storage on the Buschke Selective Reminding Test. The disproportionate vulnerability of recognition memory span task performance in DLB compared to AD may be due to greater fronto-striatal involvement in DLB and a corresponding decrement in cooperative interaction between working memory and secondary memory processes. Assessment of recognition memory span may contribute to the ability to distinguish between DLB and AD relatively early in the course of disease. PMID:26184443

  10. Adults' strategies for simple addition and multiplication: verbal self-reports and the operand recognition paradigm.

    PubMed

    Metcalfe, Arron W S; Campbell, Jamie I D

    2011-05-01

    Accurate measurement of cognitive strategies is important in diverse areas of psychological research. Strategy self-reports are a common measure, but C. Thevenot, M. Fanget, and M. Fayol (2007) proposed a more objective method to distinguish different strategies in the context of mental arithmetic. In their operand recognition paradigm, speed of recognition memory for problem operands after solving a problem indexes strategy (e.g., direct memory retrieval vs. a procedural strategy). Here, in 2 experiments, operand recognition time was the same following simple addition or multiplication, but, consistent with a wide variety of previous research, strategy reports indicated much greater use of procedures (e.g., counting) for addition than multiplication. Operation, problem size (e.g., 2 + 3 vs. 8 + 9), and operand format (digits vs. words) had interactive effects on reported procedure use that were not reflected in recognition performance. Regression analyses suggested that recognition time was influenced at least as much by the relative difficulty of the preceding problem as by the strategy used. The findings indicate that the operand recognition paradigm is not a reliable substitute for strategy reports and highlight the potential impact of difficulty-related carryover effects in sequential cognitive tasks.

  11. Effect of task demands on dual coding of pictorial stimuli.

    PubMed

    Babbitt, B C

    1982-01-01

    Recent studies have suggested that verbal labeling of a picture does not occur automatically. Although several experiments using paired-associate tasks produced little evidence indicating the use of a verbal code with picture stimuli, the tasks were probably not sensitive to whether the codes were activated initially. It is possible that verbal labels were activated at input, but not used later in performing the tasks. The present experiment used a color-naming interference task in order to assess, with a more sensitive measure, the amount of verbal coding occurring in response to word or picture input. Subjects named the color of ink in which words were printed following either word or picture input. If verbal labeling of the input occurs, then latency of color naming should increase when the input item and color-naming word are related. The results provided substantial evidence of such verbal activation when the input items were words. However, the presence of verbal activation with picture input was a function of task demands. Activation occurred when a recall memory test was used, but not when a recognition memory test was used. The results support the conclusion that name information (labels) need not be activated during presentation of visual stimuli.

  12. A Reversed-Typicality Effect in Pictures but Not in Written Words in Deaf and Hard of Hearing Adolescents

    ERIC Educational Resources Information Center

    Li, Degao; Gao, Kejuan; Wu, Xueyun; Xong, Ying; Chen, Xiaojun; He, Weiwei; Li, Ling; Huang, Jingjia

    2015-01-01

    Two experiments investigated Chinese deaf and hard of hearing (DHH) adolescents' recognition of category names in an innovative task of semantic categorization. In each trial, the category-name target appeared briefly at the screen center followed by two words or two pictures for two basic-level exemplars of high or middle typicality, which…

  13. An Investigation of the Individual Differences in Cognitive Factors that Contribute to Bilingual Lexical Disambiguation

    ERIC Educational Resources Information Center

    Areas da Luz Fontes, Ana B.

    2010-01-01

    The objective of this study was to investigate the effects of working memory capacity, access to subordinate meanings of L1 homonyms and degree of cross-language activation on the access to subordinate meanings of L2 homonyms. In Experiment 1, Spanish-English bilinguals completed a word recognition task which assessed how quickly and accurately…

  14. Autonomous planning and scheduling on the TechSat 21 mission

    NASA Technical Reports Server (NTRS)

    Sherwood, R.; Chien, S.; Castano, R.; Rabideau, G.

    2002-01-01

    The Autonomous Sciencecraft Experiment (ASE) will fly onboard the Air Force TechSat 21 constellation of three spacecraft scheduled for launch in 2006. ASE uses onboard continuous planning, robust task and goal-based execution, model-based mode identification and reconfiguration, and onboard machine learning and pattern recognition to radically increase science return by enabling intelligent downlink selection and autonomous retargeting.

  15. Identifying cognitive preferences for attractive female faces: an event-related potential experiment using a study-test paradigm.

    PubMed

    Zhang, Yan; Kong, Fanchang; Chen, Hong; Jackson, Todd; Han, Li; Meng, Jing; Yang, Zhou; Gao, Jianguo; Najam ul Hasan, Abbasi

    2011-11-01

    In this experiment, sensitivity to female facial attractiveness was examined by comparing event-related potentials (ERPs) in response to attractive and unattractive female faces within a study-test paradigm. Fourteen heterosexual participants (age range 18-24 years, mean age 21.67 years) were required to judge 84 attractive and 84 unattractive face images as either "attractive" or "unattractive." They were then asked whether they had previously viewed each face in a recognition task in which 50% of the images were novel. Analyses indicated that attractive faces elicited larger ERP amplitudes than did unattractive faces in the judgment (N300 and P350-550 msec) and recognition (P160, N250-400 msec and P400-700 msec) tasks at anterior locations. Moreover, longer reaction times and higher accuracy rates were observed in identifying attractive faces than unattractive faces. In sum, this research identified neural and behavioral bases related to cognitive preferences for judging and recognizing attractive female faces. Possible explanations for the results are that attractive female faces arouse more intense positive emotions in participants than do unattractive faces, and that they also represent reproductive fitness and mating value from an evolutionary perspective. Copyright © 2011 Wiley-Liss, Inc.

  16. Test battery for measuring the perception and recognition of facial expressions of emotion

    PubMed Central

    Wilhelm, Oliver; Hildebrandt, Andrea; Manske, Karsten; Schacht, Annekathrin; Sommer, Werner

    2014-01-01

    Despite the importance of perceiving and recognizing facial expressions in everyday life, there is no comprehensive test battery for the multivariate assessment of these abilities. As a first step toward such a compilation, we present 16 tasks that measure the perception and recognition of facial emotion expressions, and data illustrating each task's difficulty and reliability. The scoring of these tasks focuses on either the speed or accuracy of performance. A sample of 269 healthy young adults completed all tasks. In general, accuracy and reaction time measures for emotion-general scores showed acceptable and high estimates of internal consistency and factor reliability. Emotion-specific scores yielded lower reliabilities, yet high enough to encourage further studies with such measures. Analyses of task difficulty revealed that all tasks are suitable for measuring emotion perception and emotion recognition related abilities in normal populations. PMID:24860528

  17. The effect of colour congruency on shape discriminations of novel objects.

    PubMed

    Nicholson, Karen G; Humphrey, G Keith

    2004-01-01

    Although visual object recognition is primarily shape driven, colour assists the recognition of some objects. It is unclear, however, just how colour information is coded with respect to shape in long-term memory and how the availability of colour in the visual image facilitates object recognition. We examined the role of colour in the recognition of novel, 3-D objects by manipulating the congruency of object colour across the study and test phases, using an old/new shape-identification task. In experiment 1, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented in their original colour, rather than in a different colour. In experiments 2 and 3, we found that participants were faster at correctly identifying old objects on the basis of shape information when these objects were presented with their original part-colour conjunctions, rather than in different or in reversed part-colour conjunctions. In experiment 4, we found that participants were quite poor at the verbal recall of part-colour conjunctions for correctly identified old objects, presented as grey-scale images at test. In experiment 5, we found that participants were significantly slower at correctly identifying old objects when object colour was incongruent across study and test, than when background colour was incongruent across study and test. The results of these experiments suggest that both shape and colour information are stored as part of the long-term representation of these novel objects. Results are discussed in terms of how colour might be coded with respect to shape in stored object representations.

  18. Holistic processing, contact, and the other-race effect in face recognition.

    PubMed

    Zhao, Mintao; Hayward, William G; Bülthoff, Isabelle

    2014-12-01

    Face recognition, holistic processing, and processing of configural and featural facial information are known to be influenced by face race, with better performance for own- than other-race faces. However, whether these various other-race effects (OREs) arise from the same underlying mechanisms or from different processes remains unclear. The present study addressed this question by measuring the OREs in a set of face recognition tasks and testing whether these OREs are correlated with each other. Participants performed different tasks probing (1) face recognition, (2) holistic processing, (3) processing of configural information, and (4) processing of featural information for both own- and other-race faces. Their contact with other-race people was also assessed with a questionnaire. The results show significant OREs in tasks testing face memory and processing of configural information, but not in tasks testing either holistic processing or processing of featural information. Importantly, there was no cross-task correlation between any of the measured OREs. Moreover, the level of other-race contact predicted only the OREs obtained in tasks testing face memory and processing of configural information. These results indicate that these various cross-race differences originate from different aspects of face processing, contrary to the view that the ORE in face recognition is due to cross-race differences in holistic processing. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. The processing of auditory and visual recognition of self-stimuli.

    PubMed

    Hughes, Susan M; Nicholson, Shevon E

    2010-12-01

    This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine whether there were lateralization effects on these tasks. Our findings showed that visual self-recognition of facial photographs appears to be superior to auditory self-recognition of voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage in reaction time on the combined-presentation tasks. Copyright © 2010 Elsevier Inc. All rights reserved.

  20. Detection and recognition of mechanical, digging and vehicle signals in the optical fiber pre-warning system

    NASA Astrophysics Data System (ADS)

    Tian, Qing; Yang, Dan; Zhang, Yuan; Qu, Hongquan

    2018-04-01

    This paper presents a detection and recognition method to locate and identify harmful intrusions in the optical fiber pre-warning system (OFPS). Inspired by visual attention architecture (VAA), the processing flow is divided into two parts, i.e., a data-driven process and a task-driven process. First, the data-driven process takes all the measurements collected by the system as input signals, and a detection method locates the harmful intrusions in both the spatial domain and the time domain. These detected intrusion signals are then taken over by the task-driven process. Specifically, we extract the pitch period (PP) and duty cycle (DC) of the intrusion signals to identify the mechanical and manual digging (MD) intrusions, respectively. For passing vehicle (PV) intrusions, their strong low-frequency component can be used as a good feature. In general, since the harmful intrusion signals account for only a small part of the whole measurement stream, the data-driven process considerably reduces the amount of input data for the subsequent task-driven process. Furthermore, the task-driven process handles the harmful intrusions in order of their severity, which provides a priority mechanism for the system as well as targeted processing for the different harmful intrusions. Finally, real experiments were performed to validate the effectiveness of the method.
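
    The three cues named above can be sketched as simple signal features: pitch period from the autocorrelation peak, duty cycle from thresholded sample counts, and a low-frequency energy ratio for passing vehicles. The thresholds, cutoff and toy signal below are assumptions, not taken from the paper.

        import numpy as np

        def pitch_period(x: np.ndarray, fs: float) -> float:
            """Dominant period: autocorrelation peak past its first zero crossing."""
            x = x - x.mean()
            ac = np.correlate(x, x, mode="full")[len(x) - 1:]
            below = np.where(ac < 0)[0]
            start = int(below[0]) if below.size else 1
            lag = start + int(np.argmax(ac[start:]))
            return lag / fs

        def duty_cycle(x: np.ndarray, thresh: float) -> float:
            """Fraction of samples whose magnitude exceeds the threshold."""
            return float(np.mean(np.abs(x) > thresh))

        def low_freq_ratio(x: np.ndarray, fs: float, cutoff: float = 50.0) -> float:
            """Share of signal energy below the cutoff frequency (vehicle-like rumble)."""
            spec = np.abs(np.fft.rfft(x)) ** 2
            freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
            return float(spec[freqs <= cutoff].sum() / spec.sum())

        fs = 1000.0
        t = np.arange(0, 1.0, 1.0 / fs)
        segment = np.sign(np.sin(2 * np.pi * 5 * t))   # toy periodic "digging-like" burst
        print(pitch_period(segment, fs), duty_cycle(segment, 0.5), low_freq_ratio(segment, fs))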
